Title: Complex Internal Structures
Research Question: Can increasing the internal complexity of individual neurons in artificial neural networks enhance their processing power?
Methodology: The study introduces artificial neurons with arbitrarily complex internal structures, described in terms of internal variables, an activation function, and a characteristic function. The information capacity of attractor networks built from these generalized neurons is analyzed, and a specific class of generalized neurons is used to relate such attractor networks to standard three-layer feed-forward networks.
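To make the setup concrete, here is a minimal Python sketch of such a generalized neuron, assuming a vector of internal variables driven by a characteristic function and read out through an activation function. The class name, shapes, and update rule are illustrative assumptions, not the paper's exact formalism.

```python
# Minimal sketch (not the paper's exact formalism) of a generalized neuron
# with internal structure: the characteristic function drives the internal
# variables, and the activation function collapses them to a single output.
import numpy as np

class GeneralizedNeuron:
    """Neuron whose output depends on a vector of internal variables,
    not just on a single weighted sum of its inputs."""

    def __init__(self, n_inputs, n_internal, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        # One weight row per internal variable instead of a single row.
        self.W = rng.normal(size=(n_internal, n_inputs))
        self.state = np.zeros(n_internal)  # the internal variables

    def characteristic(self, x):
        # Characteristic function: maps inputs to internal-variable drives.
        return self.W @ x

    def activation(self):
        # Activation function: reads the internal state out as one value.
        return np.tanh(self.state.sum())

    def step(self, x):
        # Update the internal variables, then emit the neuron's output.
        self.state = self.characteristic(x)
        return self.activation()

# Usage: a standard neuron is recovered with n_internal = 1.
neuron = GeneralizedNeuron(n_inputs=4, n_internal=3)
out = neuron.step(np.array([1.0, -1.0, 0.5, 0.0]))
```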
Results: The study demonstrates that increasing the internal complexity of neurons can indeed enhance their processing power. The information capacity of attractor networks composed of these neurons reaches the maximum allowed bound. A simple pattern-recognition example illustrates the increased computational power of the generalized neurons. The study also presents a specific class of generalized neurons that relates attractor networks to three-layer feed-forward networks, suggesting that the maximum information capacity of these networks is 2 bits per weight.
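As a rough, hedged check of where a 2-bits-per-weight figure can come from, the following back-of-envelope computation assumes an attractor network of N units with N² weights storing p random N-bit patterns at the maximal load p = 2N (the Gardner bound for perceptron storage); the numbers are illustrative, not taken from the paper.

```python
# Back-of-envelope arithmetic (an illustrative assumption, not the paper's
# proof): storing p random N-bit patterns puts roughly p*N bits into N^2
# weights, i.e. p/N bits per weight, which reaches 2 bits at p = 2N.
N = 1000                      # network size (arbitrary choice)
p = 2 * N                     # pattern count at the maximal load p/N = 2
stored_bits = p * N           # ~1 bit per pattern component
weights = N * N
print(stored_bits / weights)  # -> 2.0 bits per weight
```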
Implications: The research suggests that the internal complexity of neurons plays a significant role in information processing. It provides a framework for studying the effects of increasing neuron complexity and offers insights into the potential benefits of more complex processing units. The study also establishes a correspondence between attractor networks and three-layer feed-forward networks, which could have implications for the design and analysis of neural networks.
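A hedged sketch of how such a correspondence can look in code: if each generalized neuron's internal variables are treated as a hidden layer, one synchronous update of the attractor network computes the same map as a three-layer feed-forward pass. All function names, shapes, and nonlinearities below are assumptions made for illustration.

```python
# Sketch of the attractor/feed-forward correspondence under the stated
# assumptions: internal variables play the role of the hidden layer.
import numpy as np

def feedforward_pass(x, W1, W2):
    # Standard three-layer network: input -> hidden -> output.
    return np.sign(W2 @ np.tanh(W1 @ x))

def attractor_update(state, W1, W2):
    # One synchronous update of a network of generalized neurons; it
    # computes exactly one three-layer feed-forward pass on the state.
    return feedforward_pass(state, W1, W2)

rng = np.random.default_rng(1)
N, H = 8, 16                        # visible units, internal variables
W1 = rng.normal(size=(H, N))
W2 = rng.normal(size=(N, H))
state = rng.choice([-1.0, 1.0], size=N)
for _ in range(10):                 # iterate toward a fixed point (attractor)
    state = attractor_update(state, W1, W2)
```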
Link to Article: https://arxiv.org/abs/0108009v1
arXiv ID: 0108009v1