How to count response time max avg for nginx logs?



I want to compute the maximum and average response time from my nginx logs, grouped per API endpoint by hour or by minute.



nginx.log sample:



10.1.1.1 - - [25/Aug/2019:05:26:30 +0700] "POST /v2/api/find/outlet/ HTTP/1.1" 200 2667 "-" "okhttp/3.12.0" "118.215.153.47" 0.178 0.178 .
10.1.1.1 - - [25/Aug/2019:05:26:30 +0700] "POST /v2/api/find/outlet/ HTTP/1.1" 200 2847 "-" "okhttp/3.12.0" "189.246.151.188" 0.177 0.178 .
10.1.1.1 - - [25/Aug/2019:05:27:52 +0700] "GET /v2/api/menu/category HTTP/1.1" 401 40 "-" "okhttp/3.12.0" "139.194.84.246" 0.007 0.007 .
10.1.1.1 - - [25/Aug/2019:05:27:52 +0700] "GET /v2/api/user/point HTTP/1.1" 200 152 "-" "okhttp/3.12.0" "202.80.217.172" 0.028 0.028 .
10.1.1.1 - - [25/Aug/2019:05:27:52 +0700] "GET /v2/api/user/destination HTTP/1.1" 200 169 "-" "okhttp/3.12.0" "36.91.42.35" 0.019 0.019 .
10.1.1.1 - - [25/Aug/2019:05:28:52 +0700] "POST /v2/api/transaction/inquiry HTTP/1.1" 200 503 "-" "okhttp/3.12.0" "36.89.234.129" 0.374 0.374 .
10.1.1.1 - - [25/Aug/2019:05:28:52 +0700] "POST /v2/api/transaction/confirm HTTP/1.1" 200 874 "-" "okhttp/3.12.0" "36.89.234.129" 0.394 0.394 .
10.1.1.1 - - [25/Aug/2019:05:28:52 +0700] "GET /v2/api/user/point HTTP/1.1" 200 152 "-" "okhttp/3.12.0" "114.5.147.117" 0.024 0.024 .
10.1.1.1 - - [25/Aug/2019:05:28:52 +0700] "GET /v2/api/menu/category HTTP/1.1" 403 40 "-" "okhttp/3.12.0" "139.194.84.246" 0.003 0.003 .


Expected output sample:



date              | api                            | max  | avg
25/Aug/2019:05:26 | /v2/api/find/outlet            | 2847 | 2757
25/Aug/2019:05:27 | /v2/api/menu/category HTTP/1.1 | 1847 | 1757
25/Aug/2019:05:28 | /v2/api/menu/category HTTP/1.1 | 1147 | 1257


I have already tried this awk, but it only counts the matching lines instead of giving me the max or average:



awk '/25\/Aug\/2019:18/ {c++} END{print c}' access.log


Thanks

shell-script awk scripting

asked 10 hours ago
Fajar Hadiyanto

  • Did you have a look at goaccess.io? It's a nice log parser and might be of use to you.

    – Bart
    8 hours ago
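
For reference, a typical GoAccess run only needs the log file plus a log-format definition. The sketch below assumes a standard combined-format log and uses access.log as a placeholder path; the extra response-time fields in the log above would need a custom --log-format definition before GoAccess could report on them.

# interactive terminal report from a combined-format access log
goaccess access.log --log-format=COMBINED

# or write a self-contained HTML report instead
goaccess access.log --log-format=COMBINED -o report.html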

1 Answer

You really ought to use one of the many web server log file analysers (e.g. https://goaccess.io/ as suggested by @Bart; there's a decent summary of some alternatives at "7 Awesome Open Source Analytics Software For Linux and Unix", and Google will find you more), but for a quick-and-dirty hack, you could use something like this:



awk -v OFS='\t' '
$0 ~ date { max[$7]+=$(NF-1); count[$7]++ };
END {
    print "date","api","count","max","avg";
    for (i in max) {
        print date, i, count[i], max[i], max[i]/count[i] }
}' date="25/Aug/2019" nginx.log


Output based on your sample is (note, the fields are separated by tabs, not spaces):



date    api     count   max     avg
25/Aug/2019 /v2/api/find/outlet/ 2 0.356 0.178
25/Aug/2019 /v2/api/user/destination 1 0.019 0.019
25/Aug/2019 /v2/api/transaction/inquiry 1 0.374 0.374
25/Aug/2019 /v2/api/user/point 2 0.052 0.026
25/Aug/2019 /v2/api/transaction/confirm 1 0.394 0.394
25/Aug/2019 /v2/api/menu/category 2 0.01 0.005
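
Because the script matches $0 ~ date against the whole line, you can narrow the report to a single hour (or a single minute) simply by passing a more specific value for date. For example, the same script limited to 05:00-05:59 on 25 August:

# same script, restricted to one hour via the date prefix
awk -v OFS='\t' '
$0 ~ date { max[$7]+=$(NF-1); count[$7]++ };
END {
    print "date","api","count","max","avg";
    for (i in max) {
        print date, i, count[i], max[i], max[i]/count[i] }
}' date="25/Aug/2019:05" nginx.log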


BTW, the awk script above is based on the assumption that the response time for a given request is in the 2nd last field ($(NF-1)). I've had to guess here because you haven't told us what logfile format you have configured for your nginx server, or what the last few fields on each line are.
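
If you also want the breakdown per minute, with the max column as an actual per-group maximum rather than an accumulated total, the same approach extends. This is only a sketch resting on the same guesses: the API path is field 7 and the response time is the second-to-last field; adjust both to match your actual log_format.

awk -v OFS='\t' '
{
    # field 4 looks like "[25/Aug/2019:05:26:30"; drop the leading "["
    # and keep "dd/Mon/yyyy:HH:MM" so records are grouped per minute
    minute = substr($4, 2, 17)

    # assumed, as above: the response time is the second-to-last field
    t = $(NF-1)

    key = minute OFS $7
    sum[key] += t
    cnt[key]++
    if (!(key in max) || t + 0 > max[key] + 0)
        max[key] = t
}
END {
    print "date", "api", "count", "max", "avg"
    for (k in max)
        print k, cnt[k], max[k], sum[k] / cnt[k]
}' nginx.log

The for (k in max) loop returns the groups in no particular order, so sort the output if you need the minutes listed chronologically.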






answered 37 mins ago
cas
