Hibernate - C3P0 JDBC connection pooling
I'm not a Java developer but a systems administrator, so forgive me for any senseless statements or questions.
I suspect the C3P0 connection pooling in Hibernate is not configured correctly, because various Java apps are opening a lot of Oracle DB connections (oracle<instanceID> (LOCAL=NO) processes). Some of those connections stay open for 30 days (probably stale) until they are auto-closed or discarded. Those connections sit in the "sleep (S)" state and their kernel stack shows them waiting in "sk_wait_data". Most of them appear to wake up every few seconds, presumably checking for data, and then go back to "sk_wait_data". Each wake-up puts the process back on the CPU run queue, which drives up the system load value. I believe this wastes a significant amount of critical system resources.
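As far as I understand (please correct me if I'm wrong), if these pools really are C3P0 managed by Hibernate, then how long an idle connection is kept around would be governed by properties like the ones below. The values here are just placeholders I made up for illustration, not what is actually deployed:

```properties
# hibernate.properties (illustrative values only, not a recommendation)
# Retire a pooled connection after it has sat idle for this many seconds
hibernate.c3p0.timeout=300
# Periodically test idle connections so dead/stale ones get evicted
hibernate.c3p0.idle_test_period=120
```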
According to JMC (Java Mission Control), the min pool size is 1 and the max varies depending on the type of application. I suspect min=1 is dead wrong, and that the max should be sized deliberately based on the application's sustained traffic plus some buffer for scalability. In JMC, usage of the C3P0 pool looks erratic and inefficient.
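For reference, my understanding is that the sizing would be set in the Hibernate configuration along these lines; the numbers below are placeholders, and the real values would have to come from measured traffic:

```properties
# hibernate.properties (placeholder numbers, size from measured load)
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
# How many connections to acquire at a time when the pool grows
hibernate.c3p0.acquire_increment=2
```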
So the constant switching of connection states seems bad and needs to be fixed. Besides seeking comments from experts on the above, I have a few questions about the DB connections:
Would an idle JDBC connection normally move in and out of "sk_wait_data" like this, or does that indicate something is wrong?
I would expect the min-size pool connections to stay connected to the DB at all times, but I don't know what their socket connection state should look like.
Also, on a VM running 2 Java apps, they seem to create something like 10 pools. I'm not sure that is right either.
Please advise.