To work with JavaServer Pages (JSP) or Java servlets, you first need to install the Apache Tomcat server. After installing Tomcat, download the Eclipse IDE.
Tomcat is the reference implementation of the Java Servlet and JavaServer Pages (JSP) specifications. It executes Java servlets and renders web pages that include JSP code, and it is available on the Apache site in both source and binary versions.
This article will help you set up the Apache Tomcat server with the Eclipse IDE. You will need the software listed below:
- Apache Tomcat Server
- Eclipse IDE
You can download both from their official download pages.
Once the downloads are complete, first install the JDK on your system, then the Eclipse IDE, and then the Apache Tomcat server. Apache Tomcat is an open-source web server and Java servlet container. Follow the steps below to set up Apache Tomcat with the Eclipse IDE:
- Open the Eclipse IDE.
- Open Window -> Preferences -> Server -> Installed Runtimes to create a server runtime.
- Click the Add button to open the New Server Runtime window.
- Select your Apache Tomcat server version under the Apache folder (for example, Apache Tomcat v6.0).
- Click the Next button.
- Specify the Tomcat installation directory.
- In the Tomcat installation window, browse to the installation directory. When you install Apache Tomcat, a Tomcat 6.0 folder is created by default under the Apache Software Foundation directory; browse to that folder.
- Click the Finish button.
- The configured Apache Tomcat server will now be displayed in the Servers view.
- Next, start the Apache Tomcat server.
- To start it, go to the Servers view.
- Right-click the server and click Start.
- This will start your Apache Tomcat server.
- You have now successfully configured the Apache Tomcat server with the Eclipse IDE.
- After setting up the Apache Tomcat server with the Eclipse IDE, test that the connection is working properly.
- To do that, open a browser (for example Google Chrome).
- Type "http://localhost:8080" into the address bar.
- You should see the Apache Tomcat home page.
- That page shows a congratulations message confirming that you have set up the Apache Tomcat server successfully.
Hopefully this information is helpful to you.
Running a cluster of Tomcat servers behind a web server can be a demanding task if you wish to achieve maximum performance and stability. This article describes best practices for accomplishing that.
By Mladen Turk
One might ask: why put a web server in front of Tomcat at all? Thanks to the latest advances in Java Virtual Machine (JVM) technology and in the Tomcat core itself, standalone Tomcat is quite comparable in performance to native web servers. Even when delivering static content, it is only about 10% slower than recent Apache 2 web servers.
The answer is: scalability.
Tomcat can serve many concurrent users by assigning a separate thread of execution to each client connection. It does that nicely, but a problem arises when the number of concurrent connections rises: the time the operating system spends managing those threads degrades overall performance, and the JVM ends up spending more time managing and switching threads than doing the real job of serving requests.
Besides connectivity there is one more significant problem, and it is caused by the applications running on Tomcat. A typical application will process client data, access the database, do some calculations, and present the data back to the client. All of that can be time consuming, and in most cases it must finish within half a second for users to perceive the application as working. Simple math shows that with a 10ms application response time you will be able to serve at most 50 concurrent users before your users start complaining. So what do you do if you need to support more users? The simplest answer is to buy faster hardware, add more CPUs, or add more boxes. Two 2-way boxes are usually cheaper than one 4-way box, so adding boxes is generally a cheaper solution than buying a mainframe.
The first way to ease the load on Tomcat is to use the web server to serve static content such as images.
|Figure 1. Generic configuration|
Figure 1 shows the simplest possible configuration. Here the web server delivers static content while Tomcat does the real job of serving the application. In most cases this is all you will need: with a 4-way box and a 10ms application time you will be able to serve 200 concurrent users, giving about 3.5 million hits per day, which is by all means a respectable number.
For that kind of load you generally do not need a web server in front of Tomcat. But here comes the second reason to put one there: creating a DMZ (demilitarized zone). Putting the web server on a host inserted as a 'neutral zone' between a company's private network and the Internet (or some other outside public network) lets the applications hosted on Tomcat access private company data while securing access to other private resources.
|Figure 2. Secure generic configuration|
Besides providing a DMZ and secure access to a private network, there can be many other factors, such as the need for custom authentication. If you need to handle more load, you will eventually have to add more Tomcat application servers, either because the client load simply cannot be handled by a single box, or because you need some sort of failover in case one of the nodes breaks.
|Figure 3. Load balancing configuration|
A configuration containing multiple Tomcat application servers needs a load balancer between the web server and the Tomcats. For the Apache 1.3, Apache 2.0, and IIS web servers you can use the Jakarta Tomcat Connector (also known as JK), because it offers both software load balancing and sticky sessions. For the upcoming Apache 2.1/2.2, use the new mod_proxy_balancer, a module designed for and integrated within the Apache httpd core.
When determining the number of Tomcat servers needed to satisfy the client load, the first and major task is determining the Average Application Response Time (hereafter AART). As said before, to satisfy the user experience the application has to respond within half a second. The content received by the client browser usually triggers a couple of physical requests to the web server (e.g. images). A web page usually consists of HTML and image data, so the client issues a series of requests, and the time in which all of them are processed and delivered is the AART. To get the most out of Tomcat, you should limit the number of concurrent requests to 200 per CPU.
So we can arrive at a simple formula for the maximum number of concurrent connections a physical box can handle:
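A sketch of that calculation, based on the 500ms response budget and the 200-requests-per-CPU cap given above (the function itself is illustrative, not from the original):

```python
def max_concurrent_connections(aart_ms: float, cpus: int) -> int:
    """Estimate how many concurrent connections one box can handle.

    Rules of thumb from the text: the application must respond within
    500 ms, and Tomcat should see at most 200 concurrent requests per CPU.
    """
    per_cpu = min(500 / aart_ms, 200)  # 500 ms budget divided by the AART
    return int(per_cpu * cpus)

# A 4-way box with a 10 ms AART gives the 200 concurrent users quoted earlier.
print(max_concurrent_connections(10, 4))  # 200
```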
The other thing you must take care of is the network throughput between the web server and the Tomcat instances. This introduces a new variable, the Average Application Response Size (hereafter AARS): the number of bytes of all the content on a web page presented to the user. On a standard 100Mbps network card, at 8 bits per byte, the maximum theoretical throughput is 12.5 MB/s.
For a 20KB AARS this will give a theoretical maximum of 625 concurrentrequests. You can add more cards or use faster 1Gbps hardware if needto handle more load.
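That arithmetic can be checked directly (a hypothetical helper; the 20KB figure uses 1KB = 1000 bytes, as in the text):

```python
def nic_concurrent_requests(throughput_bytes_per_s: int, aars_bytes: int) -> int:
    # How many average-sized responses the card can push through per unit time.
    return throughput_bytes_per_s // aars_bytes

# 100 Mbps = 12.5 MB/s; a 20 KB AARS gives the 625 figure quoted above.
print(nic_concurrent_requests(12_500_000, 20_000))  # 625
```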
The formulas above will give you a rudimentary estimate of the number of Tomcat boxes and CPUs needed to handle the desired number of concurrent client requests. If you have to plan the deployment without the actual hardware, the closest you can get is to measure the AART on a test platform and then compare the hardware vendors' SPECmarks.
Fronting Tomcat with Apache
If you need to put Apache in front of Tomcat, use Apache 2 with the worker MPM. Apache 1.3, or Apache 2 with the prefork MPM, will do for simple configurations like the one shown in Figure 1. If you need to front several Tomcat boxes and implement load balancing, use Apache 2 with the worker MPM compiled in.
An MPM, or Multi-Processing Module, is a core feature of Apache 2; it is responsible for binding to network ports on the machine, accepting requests, and dispatching children to handle them. MPMs must be chosen during configuration and compiled into the server. Compilers are capable of optimizing a lot of functions if threads are used, but only if they know that threads are being used. Because some MPMs use threads on Unix and others don't, Apache will always perform better if the MPM is chosen at configuration time and built into Apache.
The worker MPM offers higher scalability than the standard prefork mechanism, in which each client connection creates a separate Apache process. It combines the best of both worlds: a set of child processes, each with its own set of threads. There are sites running 10K+ concurrent connections using this technology.
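A minimal worker MPM tuning block for httpd.conf might look like the following; the numbers are illustrative, not recommendations:

```apacheconf
# Illustrative worker MPM settings (Apache 2.0 httpd.conf)
<IfModule worker.c>
    StartServers         2
    MaxClients         200   # total concurrent connections across all children
    MinSpareThreads     25
    MaxSpareThreads     75
    ThreadsPerChild     25   # threads in each child process
</IfModule>
```

MaxClients is the figure that must be kept in sync with Tomcat's connector limit, as discussed later in the article.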
Connecting to Tomcat
In the simplest scenario, when you need to connect to a single Tomcat instance, you can use mod_proxy, which comes as part of every Apache distribution. However, using the mod_jk connector will provide approximately double the performance. There are several reasons for this; the major one is that mod_jk maintains a persistent connection pool to Tomcat, avoiding opening and closing a connection to Tomcat for each request. The other reason is that mod_jk uses a custom protocol named AJP, and thereby avoids assembling and disassembling header parameters for each request that have already been processed on the web server. You can find more details about the AJP protocol on the Jakarta Tomcat Connectors site.
For those reasons, use mod_proxy only for low-load sites or for testing purposes. From now on I'll focus on mod_jk for fronting Tomcat with Apache, because it offers better performance and scalability.
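For a single-instance setup, a minimal mod_jk configuration could look like this (the paths, the worker name node1, and the /myapp context are assumptions for illustration):

```apacheconf
# httpd.conf: load mod_jk and forward one context to Tomcat
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile     logs/mod_jk.log
JkMount /myapp/* node1
```

```properties
# conf/workers.properties: a single AJP worker on the local host
worker.list=node1
worker.node1.type=ajp13
worker.node1.host=localhost
worker.node1.port=8009
```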
One of the major design parameters when fronting Tomcat with Apache (or any other web server) is synchronizing the maximum number of concurrent connections. Developers often leave the default configuration values on both Apache and Tomcat, and are then faced with spurious error messages in their log files. The reason is very simple: Tomcat and Apache can each accept only a predefined number of connections. If those two configuration parameters differ, usually with Tomcat having the lower configured number of connections, you will see sporadic connection errors. If the load gets even higher, your users will start receiving HTTP 500 server errors even though your hardware is capable of dealing with the load.
Determining the maximum number of connections to Tomcat in the case of the Apache web server depends on the MPM used. On the Tomcat side, the configuration parameter that limits the number of allowed concurrent requests is maxProcessors, with a default value of 20. This number needs to be equal to the corresponding MPM configuration parameter (MaxClients for the prefork and worker MPMs).
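As a sketch, with Apache's MaxClients set to 200, the matching Tomcat connector would be sized like this (note that in Tomcat 5.5 and later the attribute is named maxThreads rather than maxProcessors):

```xml
<!-- server.xml: AJP connector sized to match Apache's MaxClients of 200 -->
<Connector port="8009" protocol="AJP/1.3" maxProcessors="200" />
```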
Load balancing is one of the ways to increase the number of concurrent client connections to the application server. There are two types of load balancers you can use: hardware and software. If you use load-balancing hardware instead of mod_jk or mod_proxy, it must support a compatible passive or active cookie persistence mechanism, as well as SSL persistence.
Mod_jk has an integrated virtual load balancer worker that can contain any number of physical workers, i.e. particular physical nodes. Each node can have its own balance factor, or lbfactor: the worker's quota, expressing how much work we expect that worker to do. This parameter usually depends on the hardware topology itself, and it makes it possible to build a cluster from nodes with different hardware configurations. Each lbfactor is compared to all the other lbfactors in the cluster, and their ratio gives the actual load distribution. If the lbfactors are equal, the workers' loads will be equal as well (e.g. 1-1, 2-2, 50-50, etc.). If the first node has lbfactor 2 while the second has lbfactor 1, the first node will receive twice as many requests as the second. This asymmetric load configuration makes it possible to have nodes with different hardware architectures.
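In workers.properties, such an asymmetric two-node balancer might be declared as follows (the hostnames are placeholders; balance_workers is the attribute name used by JK 1.2.7 and later):

```properties
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2

worker.node1.type=ajp13
worker.node1.host=host1.example.com
worker.node1.port=8009
worker.node1.lbfactor=2   # expected to take twice the load of node2

worker.node2.type=ajp13
worker.node2.host=host2.example.com
worker.node2.port=8009
worker.node2.lbfactor=1
```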
In the simplest load balancer topology, with only two nodes in the cluster, the number of concurrent connections on the web server side can be twice as high as on a particular node. But there is a catch: the sum of the connections allowed on the individual nodes does not simply equal the total number of connections allowed. Each node has to allow a slightly higher number of connections than its share of the desired total, usually about 20% higher, which means that:
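The relation can be sketched as follows (a hypothetical helper; the 20% default is the experimental margin discussed below):

```python
def per_node_connections(total: int, nodes: int, margin_pct: int = 20) -> int:
    # Each node's equal share of the total, scaled up by the safety margin.
    share = total * (100 + margin_pct)
    return -(-share // (nodes * 100))  # ceiling division

# 100 desired connections over two nodes: each node must allow 60.
print(per_node_connections(100, 2))  # 60
```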
So if you wish to support 100 concurrent connections with two nodes, each node will have to handle a maximum of 60 connections. The 20% margin factor is experimental and depends on the Apache server used: for prefork MPMs it can rise up to 50%, while on NT or Netware its value is 0%. The reason is that each child process manages its own balance statistics, which produces this 20% error on web servers with multiple child processes.
The minimum configuration for the three-node cluster shown in the example above gives a 25%-50%-25% distribution of the load, meaning that node2 gets as much load as the other two members combined. It also imposes the following maxProcessors value for each particular node, given MaxClients=200.
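A sketch of that 1-2-1 setup, applying the 50-100-50 shares plus the 20% margin described above:

```properties
# workers.properties: a 1-2-1 quota gives the 25%-50%-25% split
worker.node1.lbfactor=1
worker.node2.lbfactor=2
worker.node3.lbfactor=1
```

```xml
<!-- server.xml on node1 and node3: a 50-connection share plus 20% -->
<Connector port="8009" protocol="AJP/1.3" maxProcessors="60" />
<!-- server.xml on node2: a 100-connection share plus 20% -->
<Connector port="8009" protocol="AJP/1.3" maxProcessors="120" />
```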
Using simple math the load should be 50-100-50, but we need to add the 20% load distribution error. If this additional 20% proves insufficient, set a higher value, up to 50%. Of course, the average number of connections to each particular node will still follow the load balancer's distribution quota.
Sticky sessions and failover
One of the major problems with having multiple backend application servers is determining the client-server relationship. Once the client makes a request to a server application that needs to track user actions over a designated time period, some sort of state has to be enforced on top of the stateless HTTP protocol. Tomcat issues a session identifier that uniquely distinguishes each user. The problem with that session identifier is that it does not carry any information about the particular Tomcat instance that issued it.
For that purpose Tomcat adds an extra configurable mark, the jvmRoute, to the session identifier. The jvmRoute can be any name that uniquely identifies the particular Tomcat instance in the cluster. On the other side of the wire, mod_jk uses that jvmRoute as the name of the worker in its load balancer list. This means that the worker name and the jvmRoute must be equal.
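A minimal sketch of that pairing (the name node1 is arbitrary, but it must appear in both places):

```xml
<!-- server.xml: mark sessions issued by this Tomcat instance -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
    <!-- hosts, realms, etc. -->
</Engine>
```

```properties
# workers.properties: the balancer member name matches the jvmRoute
worker.node1.type=ajp13
worker.node1.host=localhost
worker.node1.port=8009
```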
When you have multiple nodes in a cluster, you can improve your application's availability by implementing failover. Failover means that if the elected node cannot fulfill the request, another node is selected automatically. With three nodes you are actually doubling your application's availability. The application response time will be slower during failover, but none of your users will be rejected. Inside the mod_jk configuration there is a special parameter called worker.retries that has a default value of 3, but it needs to be adjusted to the actual number of nodes in the cluster.
If you add more than three workers to the load balancer, adjust the retries parameter to reflect that number. This ensures that even in the worst-case scenario the request gets served as long as there is a single operable node. Of course, the request will still be rejected if there are no free connections available on the Tomcat side, so you should increase the allowed number of connections on each Tomcat instance. In the three-node (1-2-1) scenario, if one of the nodes goes down, the other two will have to take over its load. So if the load is divided as above, you will need the following Tomcat configuration:
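A sketch of that sizing, reconstructed from the reasoning that follows (doubled limits on node1 and node3, and node2 sized for a 1-2 load plus the 20% margin):

```xml
<!-- node1 and node3 server.xml: the normal 60 doubled to 120 -->
<Connector port="8009" protocol="AJP/1.3" maxProcessors="120" />
<!-- node2 server.xml: two thirds of MaxClients=200, plus 20%, is about 160 -->
<Connector port="8009" protocol="AJP/1.3" maxProcessors="160" />
```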
This configuration ensures that 200 concurrent connections will always be available no matter which node goes down. The reason for doubling the number of processors on node1 and node3 is that they need to handle the additional load in case node2 goes down (load 1-1). Node2 also needs an adjustment, because if one of the other two nodes goes down the load will be 1-2. As you can see, the 20% load error is always calculated in.
|Figure 4. Three node example load balancer|
|Figure 5. Failover for node2|
As shown in the two figures above, setting maxProcessors depends both on the 20% load balancer error and on the expected single-node failure. The calculation must take the node with the highest lbfactor as the worst-case scenario.
Domain Clustering model
Since JK version 1.2.8 there is a new domain clustering model that offers horizontal scalability and better performance for Tomcat clusters.
A Tomcat cluster only allows session replication to all nodes in the cluster. Once you work with more than 3-4 nodes, there is too much overhead and risk in replicating sessions to all nodes, so we split the nodes into clustered groups. The newly introduced worker attribute domain lets mod_jk know to which other nodes a session gets replicated (all workers with the same value in the domain attribute). A load-balancing worker therefore knows on which nodes the session is alive. If a node fails or is taken down administratively, mod_jk chooses another node that has a replica of the session.
For example, if you have a cluster with four nodes, you can create two virtual domains and replicate sessions only inside each domain. This halves the replication network traffic.
|Figure 6. Domain model clustering|
For the above example the configuration would look like:
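A sketch for the four-node, two-domain layout of Figure 6 (the hostnames and domain names are placeholders):

```properties
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2,node3,node4

worker.node1.type=ajp13
worker.node1.host=host1.example.com
worker.node1.port=8009
worker.node1.domain=domainA

worker.node2.type=ajp13
worker.node2.host=host2.example.com
worker.node2.port=8009
worker.node2.domain=domainA

worker.node3.type=ajp13
worker.node3.host=host3.example.com
worker.node3.port=8009
worker.node3.domain=domainB

worker.node4.type=ajp13
worker.node4.host=host4.example.com
worker.node4.port=8009
worker.node4.domain=domainB
```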
Now assume you have multiple Apaches and Tomcats, the Tomcats are clustered, and mod_jk uses sticky sessions. If you shut one Tomcat down for maintenance, every Apache will open connections to all the remaining Tomcats. You end up with all Tomcats getting connections from all Apache processes, so the number of threads needed inside the Tomcats will explode. If you group the Tomcats into domains as explained above, the connections will normally stay inside the domain and you will need far fewer threads.
Fronting Tomcat with IIS
Just like the Apache web server for Windows, Microsoft IIS maintains a separate child process and thread pool for serving concurrent client connections. For non-server products such as Windows 2000 Professional or Windows XP, the number of concurrent connections is limited to 10. This means you cannot use workstation products for production servers unless the 10-connection limit fulfils your needs. The server range of products does not impose that limit, but just like Apache, around 2000 connections is the point where thread context switching takes its share and lowers the effective number of concurrent connections. If you need to handle a higher load, deploy additional web servers and use the Windows Network Load Balancer (WNLB) in front of the Tomcat servers.
|Figure 7. WNLB High load configuration|
For topologies using the Windows Network Load Balancer, the same rules apply as for Apache with the worker MPM. This means that each Tomcat instance will have to handle a 20% higher connection load per node than its real lbfactor implies. The workers.properties configuration must be identical on each node that constitutes the WNLB, meaning that you will have to configure all four Tomcat nodes on each of them.
Apache 2.2 and new mod_proxy
For the new Apache 2.1/2.2, mod_proxy has been rewritten; it has a new AJP-capable protocol module (mod_proxy_ajp) and an integrated software load balancer (mod_proxy_balancer). Because it can maintain a constant connection pool to the backend servers, it can replace the mod_jk functionality.
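A sketch of such a configuration for two Tomcats (the hostnames, routes, and the /myapp context are placeholders):

```apacheconf
# httpd.conf (Apache 2.2): a two-node Tomcat cluster behind mod_proxy_balancer
<Proxy balancer://tomcatcluster>
    BalancerMember ajp://host1.example.com:8009 route=node1 loadfactor=1
    BalancerMember ajp://host2.example.com:8009 route=node2 loadfactor=1
</Proxy>
ProxyPass /myapp balancer://tomcatcluster/myapp stickysession=JSESSIONID
```

The route values play the same role as mod_jk worker names and must match the jvmRoute of each Tomcat instance.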
The above example shows how easy it is to configure a Tomcat cluster with the proxy load balancer. One of the major advantages of using mod_proxy is the integrated caching; there is also no need to compile an external module.
Mod_proxy_balancer has an integrated manager for dynamic parameter changes. It allows changing session routes or disabling a node for maintenance.
|Figure 8. Changing BalancerMember parameters|
Future development of mod_proxy will include the option to dynamically discover the particular node topology, as well as dynamically updating load factors and session routes.
About the Author
Mladen Turk is a developer and consultant for JBoss Inc. in Europe, where he is responsible for native integration. He is a long-time committer on the Jakarta Tomcat Connectors, Apache Httpd, and Apache Portable Runtime projects.
Links and Resources
Jakarta Tomcat connectors documentation
Apache 2.0 documentation
Apache 2.1 documentation