Step 3: Specify analysis options
A new worksheet will be added to your workbook, and Analysis Setup will open automatically. In the Setup tab, specify the survey results.
Click the Data tab to specify the data required for this analysis.
Click the Train tab to choose the options for training the model. Training is the step in which the data are split into two groups: a training set and a test set.
Click the Tuning tab to identify the set of hyperparameters that gives the best fit for the model.
Click the Verify tab to confirm that all inputs are valid; each should show a green checkmark.
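The train/test split performed by the Train tab can be sketched conceptually. This is a minimal illustration under assumed settings (an 80/20 split, as reported in the results below), not the add-in's actual implementation; the toy data merely stands in for the survey results.

```python
import numpy as np

def train_test_split(X, y, train_frac=0.8, seed=23):
    """Randomly partition rows into a training set and a test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))       # shuffle the row indices
    cut = int(train_frac * len(X))      # boundary between the two groups
    train, test = idx[:cut], idx[cut:]
    return X[train], X[test], y[train], y[test]

# Toy data standing in for the survey results (32 rows, 6 features).
X = np.arange(32 * 6).reshape(32, 6).astype(float)
y = np.repeat([3, 4, 5], [15, 12, 5])   # class labels, e.g. "gear" values

X_tr, X_te, y_tr, y_te = train_test_split(X, y)
print(len(X_tr), len(X_te))             # 25 training rows, 7 test rows
```

The model is then fit on the training rows only, and the held-out test rows are used to estimate out-of-sample performance.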

Step 4: Generate analysis result
Click OK and then click Compute Outputs to get the final results.
Interpretation of Results
- The model used is AdaBoost Classification Trees (M1).
- Accuracy: 88.67%, which indicates good classification performance.
- The model was trained using Bootstrapping (Boost) as the training method.
- The optimization graph illustrates the model's tuning process for finding the best hyperparameters.
- The selected parameters were optimized by random search.
- This chart displays the relative importance of the different features used in the model.
- Higher importance values indicate variables that contribute more significantly to the predictions.
- Data Type: Classification.
- Features Used: gear, carb, vs, am, disp, qsec.
- Preferred Measure: Accuracy.
- Subsampling: None (all data points were used in training).
- Model Tuning: Random.
- Response Variable: gear (the categorical target variable).
- Training Data: 80% split, 20% missing values handled.
- Exclusion of Zero Rows: None.
- Resampling Bootstraps: 84 repetitions.
- Random seed values for resampling: 23, 29, 29, 29, 29.
- Different boosting methods were evaluated (Breiman, Freund, Zhu, etc.), with accuracy and Kappa statistics shown for each method.
- High Accuracy (88.67%): The model performs well in classifying the response variable.
- Feature Importance: The model identifies which variables contribute the most to the predictions.
- Resampling Stability: Multiple bootstraps ensure robustness.
- Potential Overfitting Risk: If accuracy is significantly higher than expected, cross-validation results should be checked.
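The accuracy and Kappa statistics reported for each method can both be computed directly from predicted versus actual labels. Below is a minimal sketch of those two formulas; the labels here are made up for illustration and are not the add-in's output.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return np.mean(y_true == y_pred)

def cohens_kappa(y_true, y_pred):
    """Agreement between predictions and truth, corrected for chance."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_obs = np.mean(y_true == y_pred)      # observed agreement
    # Expected agreement if predictions were independent of the truth.
    p_exp = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Illustrative labels (e.g. predicted vs. actual "gear" values).
y_true = np.array([3, 3, 3, 4, 4, 5, 5, 5, 4, 3])
y_pred = np.array([3, 3, 4, 4, 4, 5, 5, 3, 4, 3])

print(round(accuracy(y_true, y_pred), 2),
      round(cohens_kappa(y_true, y_pred), 3))
```

A Kappa value well above zero indicates the classifier is doing substantially better than chance agreement, which is why it is reported alongside raw accuracy for each boosting method.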
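The resampling stability point can be sketched as repeated sampling with replacement. The 84 repetitions and a fixed seed mirror the settings listed above, but the "model" here is a trivial majority-class stand-in rather than AdaBoost, and the data are toy labels; the sketch only shows why repeating the resample stabilizes the performance estimate.

```python
import numpy as np
from collections import Counter

def bootstrap_accuracies(y, n_boot=84, seed=23):
    """Score a trivial majority-class 'model' on each bootstrap resample,
    using the out-of-bag rows (those never drawn) as the test set."""
    rng = np.random.default_rng(seed)
    n = len(y)
    scores = []
    for _ in range(n_boot):
        sample = rng.integers(0, n, size=n)        # draw n rows with replacement
        oob = np.setdiff1d(np.arange(n), sample)   # rows left out of the sample
        if len(oob) == 0:
            continue
        majority = Counter(y[sample]).most_common(1)[0][0]
        scores.append(np.mean(y[oob] == majority)) # out-of-bag accuracy
    return np.array(scores)

y = np.repeat([3, 4, 5], [15, 12, 5])              # toy class labels
acc = bootstrap_accuracies(y)
print(len(acc), round(acc.mean(), 3), round(acc.std(), 3))
```

Averaging over many bootstrap repetitions smooths out the luck of any single resample, so a small standard deviation across the repetitions is evidence that the reported accuracy is robust.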