Step 3: Specify analysis options
A new worksheet is added to your workbook and the Analysis Setup opens automatically. In the Setup tab, specify the survey results.
Click the Data tab to specify the data required for this analysis.
Click the Train tab to choose the options for training the model. In this step the data are split into two groups: a training set and a test set.
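Outside the add-in, the same train/test split can be sketched with scikit-learn. This is a minimal illustration on synthetic stand-in data, not the actual survey results:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the survey data: 200 rows, 4 predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)

# Hold out 25% of the rows as the test set; the rest trains the model
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

print(X_train.shape, X_test.shape)  # (150, 4) (50, 4)
```

The test set is never shown to the model during training, so the accuracy measured on it estimates how the model will perform on new data.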
Click the Tuning tab to identify the set of hyperparameters that gives the best fit for the model.
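Hyperparameter tuning of this kind is commonly done as a cross-validated grid search. A minimal sketch with scikit-learn, assuming synthetic data and an illustrative two-parameter grid:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)

# Try each combination of candidate hyperparameters with 5-fold
# cross-validation and keep the best-scoring combination
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```

The grid here is deliberately tiny; in practice you would search over more values of the number of trees, tree depth, and other settings.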

Click the Verify tab to confirm that all inputs are valid; each valid input is shown with a green checkmark.
Step 4: Generate analysis result
Click OK and then click Compute Outputs to get the final results.
Interpretation of Results
- The bagging model used is a Parallel Random Forest, an ensemble learning method.
- Random Forest works by combining multiple decision trees trained on bootstrapped samples of the dataset to reduce variance and improve accuracy.
- The reported accuracy of the model is 75.69%, a decent performance level: the model correctly predicts the target variable in roughly 3 out of 4 cases.
- If higher accuracy is required, tuning hyperparameters (e.g., number of trees, maximum depth, feature selection) could improve results.
- The hyperparameter tuning plot shows performance across different hyperparameter settings. The accuracy curve indicates that an optimal parameter set was found, though further optimization could help; adjusting the number of estimators (trees) or the maximum depth may improve accuracy.
- The Variable Importance Plot indicates the features that most strongly influence the model's predictions. Features with higher importance scores contribute more to decision-making in the Random Forest model, which helps in selecting relevant variables and possibly eliminating less important ones to improve efficiency.
- The table provides predicted values based on the trained model. The dataset consists of multiple independent variables (such as displacement, horsepower, speed, and gears). By analyzing errors and misclassifications, we can determine whether bias reduction or further tuning is required.
- Overall, the model performs well with 75.69% accuracy, indicating a reliable prediction system. Further fine-tuning through cross-validation and feature selection could improve it.
- If accuracy needs improvement, boosting techniques (such as XGBoost or AdaBoost) may be explored as an alternative.
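The workflow interpreted above can be reproduced in code. A hedged sketch with scikit-learn on synthetic data (the 75.69% figure comes from the add-in's own output, not from this example): fit a Random Forest, score it on held-out data, inspect per-feature importances, and compare against a boosting alternative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for the survey results
X, y = make_classification(n_samples=400, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Bagging via Random Forest: many trees fit on bootstrapped samples,
# with their votes averaged to reduce variance
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print("RF accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Per-feature importance scores, the analogue of the
# Variable Importance Plot (they sum to 1)
print("Importances:", rf.feature_importances_)

# A boosting alternative if higher accuracy is needed
ada = AdaBoostClassifier(n_estimators=100, random_state=0)
ada.fit(X_train, y_train)
print("AdaBoost accuracy:", accuracy_score(y_test, ada.predict(X_test)))
```

Comparing the two accuracies on the same held-out test set is a quick way to judge whether switching from bagging to boosting is worthwhile for a given dataset.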