Can you combine features in neural networks?
Yes, you can. Some deep learning tasks require combining the outputs or intermediate representations of separately trained neural networks. Typical applications include ensemble learning and learning from two different kinds of input for the same task, such as an image encoder and a text encoder feeding one classifier. There are two common ways to combine feature vectors in a neural network.
Concatenated vector method
One of the most common and easiest ways is to concatenate the feature vectors and train the network on the concatenated vector. The major drawback of this method is the length of the resulting vector, which is the sum of the lengths of the individual vectors, so input dimensionality can grow substantially. Networks trained on higher-dimensional input features are more susceptible to overfitting and therefore require more training samples.

Averaging method
Another method is to combine the two feature vectors into one by averaging or weighted averaging.
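As a minimal sketch of the concatenation approach, the snippet below joins two hypothetical feature vectors (the names and values are illustrative, not from any specific model) and shows how the combined dimensionality is the sum of the parts:

```python
# Hypothetical feature vectors produced by two separately trained networks.
img_features = [0.2, 0.7, 0.1, 0.9]   # e.g. from an image encoder (4-dim)
txt_features = [0.5, 0.3, 0.8]        # e.g. from a text encoder (3-dim)

# Concatenation: the combined vector's length is the sum of the parts,
# so input dimensionality grows with every added feature source.
combined = img_features + txt_features
assert len(combined) == len(img_features) + len(txt_features)  # 7-dim input
```

In a deep learning framework, the same operation would be a concatenation layer placed before the first fully connected layer of the downstream network.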
To perform either operation, both vectors must have the same dimension; this can be achieved by passing each through a layer with an equal number of neurons before averaging. In weighted averaging, the weight associated with each vector can be learned during training; these weights also provide useful information, such as importance scores that measure the contribution of each individual feature vector. The drawback of this approach is the information lost by the averaging operation. The choice of combination strategy depends on the kind of problem, the input datasets, and the architectural and computational limitations when combining features in neural networks.
Devron is a next-generation federated learning and data science platform that enables decentralized analytics. Learn more about our solutions, read more of our knowledge base articles about our federated learning platform, or schedule a demo with us today.