Composition of Nonlinear Functions Between High-Dimensional Vector Spaces in Neural Network Architectures
DOI: https://doi.org/10.30605/proximal.v8i2.6538
Keywords: neural network, nonlinear transformation, data geometry, high-dimensional vector space, model interpretability
Abstract
This study presents a conceptual and experimental approach to modeling artificial neural networks as layered nonlinear transformations between high-dimensional vector spaces. Grounded in the mathematical framework of composing linear maps with nonlinear activations, the study traces how the spatial representation of the data changes at each layer of the network. Using high-dimensional synthetic data and a multilayer perceptron (MLP) architecture, the network's internal transformations are analyzed both visually, through PCA and t-SNE projections, and quantitatively, through measurements of changes in spatial metrics. The experimental results show that a neural network does not merely act as a function approximator: it actively reshapes the geometry of the data manifold to improve class separability. Activation functions such as ReLU and tanh are shown to have a significant impact on the structure of the representations, with ReLU producing stronger spatial sparsification. These findings demonstrate that understanding the spatial dynamics inside a neural network can provide a more transparent foundation for model interpretation and open new directions for deep learning interpretability research based on mathematical and geometric approaches.
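As a hedged restatement of the framework described in the abstract (the symbols L, W_l, b_l, and \sigma_l are introduced here for illustration and are not taken from the paper), an L-layer MLP can be written as the composition

f(x) = (f_L \circ \cdots \circ f_1)(x), \qquad f_l(h) = \sigma_l(W_l h + b_l),

where each W_l h + b_l is an affine map between vector spaces of possibly different dimensions and \sigma_l is a coordinate-wise nonlinear activation such as ReLU or tanh. The geometric analysis described in the abstract then amounts to tracking how the image of the data set changes after each f_l.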
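The following Python sketch illustrates the kind of layer-by-layer analysis the abstract describes; it is not the authors' code. It pushes synthetic high-dimensional, two-class data through a small randomly initialized ReLU MLP (the layer widths, weight scales, and metrics are assumptions for illustration), then reports a simple sparsity measure and a crude class-separability proxy on a 2-D PCA projection of each layer's activations (the paper also uses t-SNE, which is omitted here).

# Hypothetical sketch (not the authors' code): inspect how each MLP layer
# reshapes a synthetic high-dimensional data set, via ReLU sparsity and a
# 2-D PCA projection of the layer activations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic high-dimensional, two-class data (dimensions chosen arbitrarily).
X, y = make_classification(n_samples=500, n_features=64, n_informative=10,
                           n_classes=2, random_state=0)

def relu(z):
    return np.maximum(z, 0.0)

# Toy MLP with random weights standing in for a trained model.
widths = [64, 128, 64, 32]
weights = [rng.normal(scale=1.0 / np.sqrt(widths[i]),
                      size=(widths[i], widths[i + 1]))
           for i in range(len(widths) - 1)]
biases = [np.zeros(w) for w in widths[1:]]

h = X
for l, (W, b) in enumerate(zip(weights, biases), start=1):
    h = relu(h @ W + b)                  # affine map followed by ReLU
    sparsity = float(np.mean(h == 0.0))  # fraction of inactive units (ReLU sparsification)
    h2d = PCA(n_components=2).fit_transform(h)
    # Crude separability proxy: distance between the two class means in the projection.
    gap = np.linalg.norm(h2d[y == 0].mean(axis=0) - h2d[y == 1].mean(axis=0))
    print(f"layer {l}: sparsity={sparsity:.2f}, PCA class-mean gap={gap:.2f}")

In a trained network one would expect the class-mean gap to grow with depth while ReLU keeps a substantial fraction of units inactive; with random weights the script only demonstrates the measurement pipeline, not the paper's quantitative results.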
License
Copyright (c) 2025 Lutfan Anas Zahir, Danang Wijanarko, Danang Hadi Nugroho

This work is licensed under a Creative Commons Attribution 4.0 International License.
In submitting the manuscript to the journal, the authors certify that:
- They are authorized by their co-authors to enter into these arrangements.
- The work described has not been formally published before, except in the form of an abstract or as part of a published lecture, review, thesis, or overlay journal.
- The work is not under consideration for publication elsewhere.
- Its publication has been approved by all the author(s) and, tacitly or explicitly, by the responsible authorities of the institutes where the work was carried out.
- They have secured the right to reproduce any material that has already been published or copyrighted elsewhere.
- They agree to the following license and copyright agreement.
License and Copyright Agreement
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under the Creative Commons Attribution License (CC BY 4.0), which allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.