
Please use this identifier to cite or link to this item: https://elib.bsu.by/handle/123456789/306249
Full metadata record
DC Field | Value | Language
dc.contributor.author | Diamond, Justin
dc.date.accessioned | 2023-12-12T12:42:17Z | -
dc.date.available | 2023-12-12T12:42:17Z | -
dc.date.issued | 2023
dc.identifier.citation | Pattern Recognition and Information Processing (PRIP’2023). Artificial Intelliverse: Expanding Horizons : Proceedings of the 16th International Conference, Belarus, Minsk, October 17–19, 2023 / Belarusian State University ; eds.: A. Nedzved, A. Belotserkovsky. – Minsk : BSU, 2023. – Pp. 273-278.
dc.identifier.isbn | 978-985-881-522-6
dc.identifier.uri | https://elib.bsu.by/handle/123456789/306249 | -
dc.description.abstract | In machine learning and neural networks, non-linear transformations have been pivotal in capturing intricate patterns within data. These transformations are traditionally instantiated via activation functions such as the Rectified Linear Unit (ReLU), Sigmoid, and Hyperbolic Tangent (Tanh). In this work, we introduce DiagonalizeGNN, an approach that changes how non-linearities are introduced in Graph Neural Networks (GNNs). Unlike traditional methods that rely on pointwise activation functions, DiagonalizeGNN employs Singular Value Decomposition (SVD) to incorporate global, non-piecewise non-linearities across an entire graph’s feature matrix. We provide the formalism of this method and empirical validation on a synthetic dataset, demonstrating that our method not only achieves performance comparable to existing models but also offers additional benefits such as higher stability and the potential to capture more complex relationships. This novel approach opens up new avenues for research and has significant implications for the future of non-linear transformations in machine learning.
dc.language.iso | en
dc.publisher | Minsk : BSU
dc.rights | info:eu-repo/semantics/openAccess
dc.subject | ЭБ БГУ::ЕСТЕСТВЕННЫЕ И ТОЧНЫЕ НАУКИ::Кибернетика
dc.subject | ЭБ БГУ::ЕСТЕСТВЕННЫЕ И ТОЧНЫЕ НАУКИ::Математика
dc.title | Learnable Global Layerwise Nonlinearities Without Activation Functions
dc.type | conference paper
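
The abstract above describes replacing pointwise activation functions in a GNN layer with a global non-linearity applied through the SVD of the layer's feature matrix. Below is a minimal illustrative sketch of that idea in PyTorch, not the authors' code: the class name DiagonalizeGNNLayer, the message-passing step, and the particular learnable transform of the singular values (a per-layer scale and shift followed by softplus) are assumptions made only for illustration.

# Illustrative sketch (not the paper's implementation) of an SVD-based
# global non-linearity in place of a pointwise activation function.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiagonalizeGNNLayer(nn.Module):
    """One GNN layer: linear message passing + SVD-based global non-linearity."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        # Learnable parameters acting on the spectrum (assumed parameterization).
        self.scale = nn.Parameter(torch.ones(out_dim))
        self.shift = nn.Parameter(torch.zeros(out_dim))

    def forward(self, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # Standard propagation step: aggregate neighbours, then mix channels.
        z = self.lin(adj @ h)                          # (n, out_dim)
        # Global non-linearity: transform the singular values of the whole
        # feature matrix instead of applying a pointwise activation.
        u, s, vh = torch.linalg.svd(z, full_matrices=False)
        k = s.shape[0]                                 # k = min(n, out_dim)
        s_new = F.softplus(self.scale[:k] * s + self.shift[:k])
        return u @ torch.diag(s_new) @ vh              # (n, out_dim)


if __name__ == "__main__":
    n, d = 8, 4
    adj = torch.eye(n) + torch.rand(n, n).round()      # toy adjacency with self-loops
    h = torch.randn(n, d)
    layer = DiagonalizeGNNLayer(d, d)
    print(layer(adj, h).shape)                         # torch.Size([8, 4])

A layer of this kind can be stacked like any other GNN layer; the resulting network's only non-linearities then act globally on each layer's spectrum rather than elementwise on individual feature entries.
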
Appears in collections: 2023. Pattern Recognition and Information Processing (PRIP’2023). Artificial Intelliverse: Expanding Horizons

Full text of the document:
File | Description | Size | Format
273-278.pdf |  | 443,73 kB | Adobe PDF



All documents in the Electronic Library are protected by copyright; all rights reserved.