Sigma Magic allows users to build Feedforward Neural Networks (FNN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN) for different use cases like classification, regression, and time-series forecasting.
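To make the feedforward case concrete, here is a minimal NumPy sketch of an FNN forward pass; the layer sizes, weights, and the `fnn_forward` helper are illustrative assumptions, not part of Sigma Magic itself.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def fnn_forward(x, weights, biases):
    """Forward pass through a small feedforward network (illustrative helper)."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)              # hidden layers use ReLU
    return a @ weights[-1] + biases[-1]  # linear output layer (e.g. regression)

rng = np.random.default_rng(0)
# Hypothetical architecture: 4 inputs -> 8 hidden units -> 1 output
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 1))]
biases = [np.zeros(8), np.zeros(1)]
x = rng.normal(size=(3, 4))              # batch of 3 samples
y = fnn_forward(x, weights, biases)
print(y.shape)                           # (3, 1): one prediction per sample
```

CNNs and RNNs follow the same pattern but replace the dense layers with convolutional and recurrent layers, respectively.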
Sigma Magic supports gradient descent optimization techniques like Stochastic Gradient Descent (SGD), Adam, RMSprop, and Momentum-based methods to improve learning efficiency.
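A momentum-based update, for example, accumulates a velocity term so that steps build up along consistent gradient directions. The sketch below shows this on a simple quadratic; the `sgd_momentum` helper and its hyperparameters are assumptions for illustration only.

```python
import numpy as np

def sgd_momentum(grad_fn, w0, lr=0.1, beta=0.9, steps=200):
    """Minimal momentum-SGD loop (hypothetical helper, not Sigma Magic's API)."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad_fn(w)   # accumulate velocity from past gradients
        w = w - lr * v              # step against the smoothed gradient
    return w

# Minimize f(w) = ||w - 3||^2, whose gradient is 2*(w - 3); the minimum is w = 3.
w_star = sgd_momentum(lambda w: 2.0 * (w - 3.0), w0=[0.0, 10.0])
print(np.round(w_star, 3))
```

Adam and RMSprop extend this idea by additionally rescaling each parameter's step by a running estimate of its gradient magnitude.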
Sigma Magic provides task-appropriate loss functions: Cross-Entropy Loss for classification, and Mean Squared Error (MSE) or Mean Absolute Error (MAE) for regression.
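The three losses are easy to state directly. The NumPy definitions below are generic textbook formulas, not Sigma Magic internals; `cross_entropy` here assumes the inputs are already class probabilities.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-probability of the true class (classification)."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def mse(y_pred, y_true):
    """Mean Squared Error: penalizes large errors quadratically (regression)."""
    return np.mean((y_pred - y_true) ** 2)

def mae(y_pred, y_true):
    """Mean Absolute Error: robust to outliers relative to MSE (regression)."""
    return np.mean(np.abs(y_pred - y_true))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])                 # true classes for each row
ce = cross_entropy(probs, labels)

y_pred = np.array([2.5, 0.0])
y_true = np.array([3.0, -0.5])
print(ce, mse(y_pred, y_true), mae(y_pred, y_true))
```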
You can use regularization techniques like L1/L2 (Ridge & Lasso), dropout layers, early stopping, and data augmentation to prevent overfitting in neural networks.
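Two of these techniques can be sketched in a few lines: an L2 (Ridge) penalty added to the loss, and inverted dropout applied to activations during training. The helper names and the penalty coefficient `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def l2_penalty(weights, lam=0.01):
    """L2 (Ridge) term added to the loss: lam * sum of squared weights."""
    return lam * sum(np.sum(W ** 2) for W in weights)

def dropout(a, p=0.5, training=True):
    """Inverted dropout: zero activations with probability p during training,
    rescaling the survivors so the expected activation is unchanged."""
    if not training:
        return a                      # at inference time, pass through unchanged
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

W = [np.ones((2, 2))]                 # toy weight matrix: four weights of 1.0
penalty = l2_penalty(W)               # 0.01 * 4 = 0.04
a = np.ones((4, 5))
a_drop = dropout(a, p=0.5)            # roughly half the entries become 0
print(penalty, a_drop.shape)
```

L1 (Lasso) replaces the squared term with `np.abs(W)`, which tends to drive weights exactly to zero; early stopping and data augmentation operate on the training loop and the data rather than the loss.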
Sigma Magic supports activation functions such as ReLU (Rectified Linear Unit), Sigmoid, Tanh, Softmax, and Leaky ReLU, depending on the neural network layer type.
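These activations all have one-line definitions, shown below in NumPy for reference (generic formulas, not Sigma Magic code). Softmax is typically reserved for the output layer of a classifier, since it turns raw scores into probabilities that sum to 1.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)   # small slope for negative inputs

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # squashes to (0, 1)

def tanh(x):
    return np.tanh(x)                      # squashes to (-1, 1)

def softmax(x):
    e = np.exp(x - np.max(x))              # shift by max for numerical stability
    return e / e.sum()

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))          # [0. 0. 2.]
print(leaky_relu(x))    # negative inputs scaled by alpha
print(softmax(x))       # probabilities; sums to 1
```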