Detecting overfitting is not possible unless we test the model on unseen data.
One of the leading indicators of an overfit model is its inability to generalize to new data. The most straightforward way to detect overfitting in a machine learning model is to split the dataset into separate training and testing sets. This lets us examine the model's performance on each set, spot overfitting when it occurs, and see how well the training process works.
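As a minimal sketch (assuming scikit-learn and a synthetic dataset), we can hold out a test set and compare training accuracy against test accuracy; a large gap between the two is a classic sign of overfitting:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data stands in for a real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Segment the dataset: 80% for training, 20% held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# An unrestricted decision tree overfits easily, which makes the gap visible.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

Here the tree fits the training set almost perfectly, while its test accuracy is noticeably lower; that gap, not the training score alone, is what signals overfitting.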
K-fold cross-validation is one of the most popular techniques for detecting overfitting. We split the data points into k equally sized subsets, called "folds." The model is then trained k times: in each iteration, one fold acts as the validation set while the remaining k−1 folds are used to train the model. Training on these limited samples estimates how the model is expected to perform in general when making predictions on data not seen during training.
After all k iterations, we average the fold scores to assess the model's overall performance.
Learn more at What is overfitting in Deep Learning.