Step 3: Specify analysis options
A new worksheet will be added to your workbook, and Analysis Setup will open automatically. In the Setup tab, specify the survey results.
Click the Data tab to specify the data required for this analysis.
Click the Train tab to choose the options for training the given model. Training is the step where the data is split into two groups: a training set and a test set.
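The split described here can be sketched in plain Python. This is a minimal illustration only; the 80/20 ratio and the 100-row stand-in data are assumptions, since the add-in performs the split internally:

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Randomly partition rows into a training set and a test set."""
    rng = random.Random(seed)
    shuffled = rows[:]            # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

data = list(range(100))           # stand-in for 100 survey rows
train, test = train_test_split(data)
print(len(train), len(test))      # 80 20
```

Fixing the seed makes the split reproducible, which is useful when comparing tuning runs against the same held-out test set.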
Click the Tuning tab to identify the set of hyperparameters that gives the best fit for the given model.
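The idea behind this tuning step can be sketched in plain Python as a random search over candidate settings. The candidate k values and the stand-in scoring table below are hypothetical, used only to show the mechanics:

```python
import random

def random_search(candidates, score_fn, n_trials=3, seed=0):
    """Try n_trials randomly chosen hyperparameter settings, keep the best."""
    rng = random.Random(seed)
    best_setting, best_score = None, float("-inf")
    for _ in range(n_trials):
        setting = rng.choice(candidates)
        score = score_fn(setting)
        if score > best_score:
            best_setting, best_score = setting, score
    return best_setting, best_score

# Hypothetical candidates and a stand-in accuracy lookup for each k.
k_values = [1, 3, 5, 7, 9, 11]
accuracy_of = {1: 0.58, 3: 0.66, 5: 0.74, 7: 0.71, 9: 0.69, 11: 0.65}

best_k, best_acc = random_search(k_values, accuracy_of.get, n_trials=3)
print(best_k, best_acc)
```

With `n_trials=3` this mirrors a tuning length of 3: only three of the six candidate configurations are ever evaluated, which trades thoroughness for speed compared with an exhaustive grid search.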
Click the Verify tab to confirm that all inputs are valid; each should show a green checkmark.

Step 4: Generate analysis result
Click OK, then click Compute Outputs to get the final results.
Interpretation of Results
- The chosen prototype model is k-NN, a classification algorithm that predicts outcomes based on the closest neighbors in the dataset.
- The accuracy of the model is 73.73%, meaning that roughly 74 out of 100 predictions are correct.
- Objective: The model is used for classification.
- Data Features Used: Variables like x1, x2, x3, x4, x5, etc.
- Hyperparameters Tuned:
  - Tuning Length: 3 (suggesting the model was optimized by testing 3 different configurations).
  - Tuning Method: Random Search to find optimal parameters.
  - Distance Metric: Likely Euclidean or Minkowski distance, as commonly used in k-NN models.
- Training Data Size: 80 rows, divided into multiple classes.
- Resampling Method Used: Likely cross-validation to avoid overfitting.
- Model Predictions Table:
  - The model predicts values based on the nearest neighbors.
  - Predictions are influenced by the input variables (x1 to x5).
- Accuracy Scores for Different Runs:
  - Scores range between 0.5735 and 0.6553, showing variance in performance.
  - The final model's best accuracy is 73.73% after resampling.
- Validation Method: Likely out-of-bag error validation, ensuring robustness.
- A Variable Importance Plot is present, showing which input variables had the most influence.
- Key influential features appear to be x2, x3, and x4, which significantly impact the prediction outcome.
- The model performs decently but has scope for improvement.
- The next steps could be:
  - Tuning hyperparameters further, such as adjusting k values or using different distance metrics.
  - Trying alternative models, such as Decision Trees or Random Forest, to compare accuracy.
  - Feature selection and data preprocessing to improve prediction reliability.
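The k-NN prediction mechanism described above (Euclidean distance, majority vote among the nearest neighbors) can be sketched in plain Python. The toy feature vectors and labels are hypothetical stand-ins, not the actual survey data:

```python
import math
from collections import Counter

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (features, label) pairs.
    """
    neighbors = sorted(train, key=lambda row: euclidean(row[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical toy data with two features (stand-ins for x1, x2).
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.8), "B"), ((4.9, 5.1), "B")]
test  = [((1.1, 0.9), "A"), ((5.1, 5.0), "B"), ((0.8, 1.2), "A")]

correct = sum(knn_predict(train, x, k=3) == y for x, y in test)
print(f"accuracy = {correct / len(test):.2%}")  # accuracy = 100.00%
```

Swapping `euclidean` for a Minkowski distance (or changing `k`) is exactly the kind of adjustment the "next steps" above suggest trying.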
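The cross-validation resampling mentioned above can be sketched similarly. The 80 rows and the majority-class stand-in model are hypothetical, used only to show how a set of per-fold accuracy scores (like the run-to-run variance in the results) arises:

```python
import random

def k_fold_split(rows, k_folds=5, seed=1):
    """Shuffle rows and yield (train, validation) pairs, one per fold."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    for i in range(k_folds):
        validation = shuffled[i::k_folds]          # every k-th row, offset i
        train = [r for j, r in enumerate(shuffled) if j % k_folds != i]
        yield train, validation

def majority_class(train):
    """Toy model: always predict the most common label in the training fold."""
    labels = [label for _, label in train]
    return max(set(labels), key=labels.count)

# Hypothetical: 80 rows with a dummy feature and a class label.
rows = [((i,), "A" if i % 3 else "B") for i in range(80)]

scores = []
for fold_train, validation in k_fold_split(rows, k_folds=5):
    pred = majority_class(fold_train)
    scores.append(sum(label == pred for _, label in validation) / len(validation))
print([round(s, 3) for s in scores])
```

Because every row is held out exactly once, averaging the fold scores gives a less optimistic accuracy estimate than scoring on the training data itself, which is why resampling helps guard against overfitting.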