Performance tuning guide for Linux
Understanding Linux CPU utilization
Important! First, note how many CPUs / cores your server has: cat /proc/cpuinfo
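If you only need the count, either of these works:
grep -c ^processor /proc/cpuinfo    # number of logical CPUs
nproc                               # same thing, via coreutils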
Linux system monitoring with sar: IO, network, CPU, memory
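Typical sar invocations from the sysstat package (1-second samples, five of them):
sar -u 1 5        # CPU utilization
sar -r 1 5        # memory utilization
sar -b 1 5        # I/O and transfer rates
sar -n DEV 1 5    # network traffic per interface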
profiling in production
Identifying which Java thread is consuming the most CPU
I didn't come up with this. I was shown how to do this by an esteemed colleague at work.
Introduction
Most (if not all) production systems doing anything important will use more than one Java thread. And when something goes crazy and your CPU usage is at 100%, it is hard to identify which thread(s) is/are causing it. Or so I thought, until someone smarter than me showed me how it can be done. Here I will show you how to do it, and then you too can amaze your family and friends with your geek skillz.
monitoring in production (and other envs...)
example http://demo.javamelody.cloudbees.net/monitoring (see threads section)
* run top
* press Shift-H to enable Threads View
* get PID of the thread with highest CPU
* convert PID to HEX
* get stack dump of java process
* in the stack dump, look for the thread with the matching hex PID (the nid=0x... field).
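Put together on the command line, the steps might look roughly like this (top -H is the command-line equivalent of pressing Shift-H inside top; the PIDs are made up for illustration):
top -H -p 4242                      # 4242 = hypothetical java process PID, per-thread view
                                    # note the PID of the busiest thread, e.g. 4267
printf '%x\n' 4267                  # convert that thread PID to hex -> 10ab
jstack 4242 > /tmp/jstack.log       # stack dump of the whole java process
grep 'nid=0x10ab' /tmp/jstack.log   # header line of the thread with the matching hex PID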
This might be a little old, but here's what I did to kinda merge top and jstack together. I used two scripts, but I'm sure it all could be done in one.
First, I save the output of top with the pids for my java threads into a file and save the jstack output into another file:
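The actual scripts aren't included in these notes; here is a minimal sketch of what the first one (cpu-java.sh) might look like, assuming the Java PID is passed as its first argument:
#!/bin/bash
# cpu-java.sh (sketch) -- snapshot per-thread CPU usage and a full thread dump
PID="$1"                                  # java process id, passed in by the caller
top -b -H -n 1 -p "$PID" > /tmp/top.log   # batch mode, one iteration, threads view
jstack "$PID" > /tmp/jstack.log           # thread dump of the same process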
Then I use a perl script to call the bash script (called cpu-java.sh here) and kinda merge the two files (/tmp/top.log and /tmp/jstack.log):
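The perl script isn't reproduced here either; the idea is just to take each thread PID and %CPU from /tmp/top.log, convert the PID to hex, and print the matching nid=0x... header line from /tmp/jstack.log. A shell/awk sketch of that idea (not the original perl):
#!/bin/bash
# merge-top-jstack.sh (sketch) -- shell/awk stand-in for the original perl script
./cpu-java.sh "$1"                           # writes /tmp/top.log and /tmp/jstack.log
# pull the thread PID and %CPU columns out of top's output
awk '/^ *[0-9]+ / {print $1, $9}' /tmp/top.log |
while read -r tid cpu; do
    nid=$(printf '%x' "$tid")                        # decimal thread PID -> hex nid
    line=$(grep -m1 "nid=0x$nid " /tmp/jstack.log)   # matching thread header in the dump
    echo "$cpu% $line"
done | sort -rn                                      # busiest threads first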
The output helps me find out which threads are hogging my CPU.
Then I can go back to /tmp/jstack.log and take a look at the stack trace for the problematic thread and try to figure out what's going on from there. Of course this solution is platform-dependent, but it should work with most flavors of *nix, with some tweaking here and there.
Measure network bytes in/out
sar -n DEV 1 100
Count number of TCP connection closes (FIN packets)
tcpdump 'tcp[13] & 1 != 0' -w result.pcap    # capture packets with the FIN flag set (-w writes binary pcap, not text)
tcpdump -r result.pcap | wc -l               # read the capture back and count the FIN packets
Method for finding high-CPU methods without a profiler (simulating hprof)
Ideally you can use a profiler to track down the problem. If you can't use a profiler in the environment it's in (like production) then try to reproduce it somewhere else and attach a profiler there. Often that can be difficult though, so here's a trick I've used a number of times to find the cause of CPU utilization in production from the command line:
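The exact command isn't preserved in these notes; the usual shape of the trick is to sample stack dumps repeatedly and count which methods keep appearing at the top of RUNNABLE threads. A rough sketch (hypothetical PID, 30 one-second samples):
PID=4242                        # hypothetical java process id
for i in $(seq 1 30); do
    jstack "$PID" | awk '/java.lang.Thread.State: RUNNABLE/ {getline; print $2}'
    sleep 1
done | sort | uniq -c | sort -rn | head -20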
If you watch this for a short period of time you might see some common methods being called. Next grab a full jstack output to a file:
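The command itself isn't shown in these notes; it's just a full thread dump redirected to a file, for example (hypothetical PID and path):
jstack 4242 > /tmp/jstack-full.log    # hypothetical PID and output file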
Now search the file for one of the methods that you saw showing up frequently. From its stack trace you can usually find the code responsible for grinding the CPU. If nothing shows up then perhaps your time is going to excessive garbage collection and you simply need to increase memory. You can use:
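The command that originally followed here isn't preserved either. One standard way to watch GC overhead is jstat, e.g. (hypothetical PID, one sample per second):
jstat -gcutil 4242 1000    # heap-region utilization plus GC counts and accumulated GC time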
Tomcat tuning external resources
see the Tomcat tuning book
see the Java profiling and tuning book.