Interpretation of ROC AUC score
I tried to evaluate 6 models, and after plotting their ROC curves this is what I got (plot not reproduced here).

So I'm wondering: are those results "right"?

Thanks in advance.

Tags: random-forest, svm, logistic-regression, model-selection
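For context, a multi-model ROC comparison of this kind is typically produced along the lines of the minimal sketch below, which assumes scikit-learn and matplotlib and uses purely hypothetical data and models:

```python
# Illustrative sketch: ROC curves for several classifiers on one chart.
# The data and models below are hypothetical placeholders.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    fpr, tpr, _ = roc_curve(y_test, scores)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {roc_auc_score(y_test, scores):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```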
– Upper_Case: Can you give some more information on your project? As Juan said, these AUC numbers are suspiciously good, and this chart alone is not very informative.
– Dimi: I have to develop a model to predict whether a person will get hired as a data scientist or not, and this is just some model evaluation to see which model is best for my task.
2 Answers
Did you evaluate the results on the training set or on the test set?

Those results are outstandingly good. Suspiciously good.

I think you evaluated on the training set only, so your results reflect overfitting: the model has memorized the training data rather than learning a generalizable pattern, which makes the scores inapplicable to any other data you may encounter in the future, and therefore not useful.

To compare ROC curves across methodologies, fit the models while guarding against overfitting and evaluate them on a test dataset (data the model has never seen, which you can obtain by partitioning your dataset). That way the comparison does not simply reward the model that best memorizes your data.

– Juan Esteban de la Calle
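As a concrete illustration, here is a minimal sketch, assuming scikit-learn and purely hypothetical data, that scores one classifier on both the training split and a held-out test split; a large gap between the two AUC values is the overfitting signature described above:

```python
# Sketch: the same model scored on the training split vs. a held-out test split.
# Data and model are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# ROC AUC needs predicted probabilities (or decision scores), not hard labels.
train_auc = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"train AUC: {train_auc:.3f}   test AUC: {test_auc:.3f}")
# A large gap between the two numbers suggests the model has memorized the training set.
```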
– Dimi: Well, I did split my data into training and testing sets, and this evaluation was done only on the training set.
– Juan Esteban de la Calle: Yes, that is exactly what is happening. You should assess your model on data it has not seen (the test set), so you can measure how well it generalizes.
– Dimi: Alright, thanks a lot.
I'm not sure that AUC is the right value to use to compare these models; have a look at this question for a bit more detail.

In any case, the AUC on your training data is not very informative, and assessing your model's performance on the training set isn't enough to determine how "right" your model(s) may be, no matter how you go about it. Such a comparison could be done on a test set at the earliest, and better still on totally out-of-sample data (since final model training usually includes re-training on the full dataset).

Finally, applied model performance will be defined by choosing a specific cut point for your predicted scores (and the corresponding true positive/false positive tradeoff) rather than by the model's overall ability to discriminate between outcomes across many possible cut points. Overall AUC may still be interesting, but applied model performance is a more precise question (this question has some good information on that).

– Upper_Case
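As an illustration of the cut-point idea, here is a small sketch, assuming scikit-learn and hypothetical data, that picks an operating point from the ROC curve; the Youden's J rule below is just one possible choice, and the right rule depends on the relative costs of the two error types:

```python
# Sketch: choosing an operating point (cut point) from the ROC curve.
# Data, model, and the threshold rule are hypothetical placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, scores)

# One simple rule of thumb: maximize TPR - FPR (Youden's J). In practice the
# cut point should reflect the real costs of false positives vs. false negatives.
best = np.argmax(tpr - fpr)
print(f"cut point: {thresholds[best]:.3f}  TPR: {tpr[best]:.3f}  FPR: {fpr[best]:.3f}")

# Applied performance is then judged on the hard decisions made at this cut point.
y_pred = (scores >= thresholds[best]).astype(int)
```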
– Dimi: Interesting, thank you.