ON-DEMAND OR ON-PREMISE
Business owners and decision makers have options for how they manage and run their day-to-day business applications. With those choices comes the assessment of hiring and maintaining IT staff versus hosting your software in an on-demand environment. Interprise offers a product that works well in both. This article is written to help future customers understand the moving pieces necessary to make the best decision for their company. I have assisted over 50 companies in analyzing which option is best and which pitfalls to avoid. What I hope to convey are methods of proof, so that speculation does not override reality.
With that out of the way, let's discuss the fundamental options: you can host it, or someone else can host it. The factors used to decide should each carry a weighted value. The two endpoints are identical in either case, a client and a server communicating and exchanging data. So how different can the options be?
OPTION 1: ON-PREMISE
On-premise servers can be accessed from anywhere with a stable internet connection. Most offices, however, do not have the redundancy or high-bandwidth capability to correctly support high-availability software access, nor do most companies have the 24/7 support staff required to maintain such an environment. Culturally, many companies are sensitive to the physical location of their data. Hosting the data yourself gives the comfort that your data is always within your control, provided you are willing to accept all the costs that come with it.
- One mid-level server is necessary to host SQL Server Express (the minimum requirement). Note that at the time of this writing there are five editions of SQL Server on the market, with prices ranging from free to many thousands of dollars. Data load, volume of data stored, and server capabilities usually determine the right edition for you.
- IIS will be necessary if you are running web services and/or one or more ISE websites. IIS is built into Windows Server platforms.
- A high-speed LAN connection between IIS and SQL Server (if you have more than one server in your infrastructure).
- A disaster recovery process must be in place for websites and databases. In planning discussions with customers, I get asked how often full and incremental backups should be run. My answer is vague but spot on: it depends on how much data you can afford to lose. Doing good backups will cost you in hardware, software, and process expenses. It's very much like an insurance policy, except you are the one on the hook to restore if something goes wrong.
- You will usually need a business-grade internet connection (guaranteed speeds for data transfer both up and down) to support web services and websites (see the 'Connection Exploration Process' below).
- One or more people, on staff or staff-augmented, who understand your computers, your network, and your staff's computer needs.
Advantages:
- Connecting over a LAN is the fastest way for a client app and server to exchange data.
- You will not need a cache database to hold business logic and other offline, non-transactional information.
- You own your data. Many customers feel uneasy when their data is hosted in a data center they do not control, and they perceive their data as more secure on their own premises.
- Updates to the software are usually easier to deploy.
- Creating DEV/BETA environments is in your control (not necessary for many standard installations, but required for all custom implementations).
Disadvantages:
- You need a stable business-level internet connection if you host an ISE website and/or run IS web services. If your internet goes down, so does the external exposure of your business.
- You must be in the office to connect to the data, or use VPN/RDP/TS technologies to reach the data server.
- Less portability for sales people.
- Server-side operations such as disaster recovery processes and ongoing maintenance, and the costs of having that expertise on staff or staff-augmented.
- Server hardware and software licensing costs are not trivial for many.
OPTION 2: ON-DEMAND
On-demand services can be accessed from anywhere with a stable internet connection. The managed hosting provider will have power redundancy, round-the-clock staffing, high-bandwidth capability, and disaster recovery capability.
Advantages:
- The most reliable uptime for both the server and your Interprise presence. A good hosting company has redundant power, system cooling, redundant internet connections into various parts of the building, and staff onsite 24x7 who can be called in case of emergency. Their job is to keep your web presence and access to your database up and secure.
- All technical aspects are handled by staff trained to work with your server environment
- Servers are hosted in an environment that is specially built to host servers
- Licensing costs are spread out amongst the volume of customers on the server and built into the monthly hosting costs
- Disaster Recovery is part of your support package with the hosting company
Disadvantages:
- Some do not like the concept of their data being held in a different location.
- You probably will not personally know the people you are working with
What else needs to be considered?
Simply speaking, if you are going to host your own servers and services, the balance of this article may only serve as reference material. If you are considering a hosted version of Interprise, please read on.
When Interprise Suite is hosted for you, at a minimum the database will be connected to a web service, and your computer will connect over the internet to that web service. The Microsoft Windows service called IIS (Internet Information Services) will accept the request from the client application.
Now for the reality check: not all internet connections are created equal. Some have high speeds but are not dependable due to packet loss; some have slower speeds but are very dependable. Many times I ask a customer, "What type of internet connection do you have?" and the response is usually "It's great, I never have an issue browsing the web, getting email, or watching my favorite online video."

By nature, the internet is stateless. There is no start and no end to a conversation, a little like a walkie-talkie: you do not control from your handset whether the radio waves will reach the other receiver. Now think of something that is not stateless, a phone call for instance. That connection is persistent until one or both sides hang up. The phones have a dedicated channel between them, and the infrastructure provides dedicated bandwidth to support the communication. For the most part, this is not how the internet works. When a request for information is sent to a website or web service, we are not guaranteed the path it will take, how long it will take, or whether it will need to be resent. There are exceptions with dedicated circuits, but for us common people, this is the reality.
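The walkie-talkie versus phone-call analogy maps onto UDP (connectionless) and TCP (connection-oriented). A minimal Python sketch on the loopback interface makes the distinction concrete; the port number in the UDP example is an arbitrary illustration:

```python
import socket

# UDP: no handshake -- like a walkie-talkie, you transmit whether or not
# anyone is listening, and delivery is never acknowledged.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"anyone there?", ("127.0.0.1", 50007))  # succeeds even if nothing listens
udp.close()

# TCP: a handshake establishes a persistent channel -- like a phone call,
# both ends must participate before any data flows.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # the three-way handshake happens here
conn, _ = server.accept()
client.sendall(b"hello")
received = conn.recv(5)              # delivered reliably and in order
print(received)
for s in (client, conn, server):
    s.close()
```

The UDP send "succeeds" locally no matter what, which is exactly why the internet cannot promise your request arrived; TCP adds the acknowledgements, but not a dedicated path.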
Scenario #1: A customer called me and said they were having speed issues with Interprise. I asked how they were connecting. They explained that they were connecting from their home to their office (an on-premise installation). I asked what type of connection they had and was told DSL at home and DSL at the office. I asked who their provider was, and they said a local ISP. I asked them to run the CEP listed below. It returned 30+ hops, and many of the hops were over 1000 ms. I had them call their internet provider and ask about the hops. The provider said they were a reseller of internet service and would need to discuss it with their support people. In short, the customer changed providers to a major carrier and the issues went away.
Scenario #2: A customer called me and said all client machines were on the LAN, but Interprise kept freezing, and many times they couldn't log in because IS couldn't connect to the SQL Server. The LAN administrator said, "it should be very fast, the server is sitting right next to the rest of the boxes." I might be critical in saying this, but that is one of the craziest things I have ever heard. The physical location of a box does not matter; it's all about the packet routing (see the CEP below). What we found out is that Interprise had nothing to do with the issue. Their DNS server, which is part of the smarts behind routing IP traffic, was on a different segment of the WAN, and packet routing was the core issue.
Scenario #3: A customer called me and said Interprise was having speed issues. I asked if it was web service based or LAN based. He was not sure. I had him look at the F2 configuration of the app, and indeed he was connected in LAN mode. The performance of IS was horrible. I looked at the number of customer records they had, and it was around 20. They had 50 items. Yet the performance was still horrible. I asked to connect to the server, and they said they needed to log into a different VPN for that server. It turns out he was using VPN to connect directly to the SQL Server database rather than using web services, because he thought web services would be slower. This one to me was the gem of all gems. Think about this: VPN is a technology that encrypts and compresses your data, then sends it over the internet to a smart router that hooks you up with your server. And it is probably the same router that receives requests for the web services as well. The only difference is that web services are designed to perform over "slower than LAN" connections; direct SQL connections are not. So this customer had outsmarted himself by using a protocol that was not optimized to work over VPN. Setting up a web service immediately brought better results. Nothing was done intentionally, but I was glad they called looking for an opportunity to improve performance.
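The lesson of Scenario #3 comes down to round trips: a direct SQL connection is "chatty" (many small request/response exchanges), while a web service batches the work into a few calls, so per-round-trip latency dominates. A back-of-the-envelope Python sketch; the exchange counts and latencies below are illustrative assumptions, not measurements:

```python
def total_latency_ms(round_trips: int, rtt_ms: float) -> float:
    """Time spent purely waiting on the network, ignoring data transfer time."""
    return round_trips * rtt_ms

# Illustrative assumption: loading a form takes 200 small SQL exchanges,
# or 2 batched web-service calls that return the same data.
lan_rtt, vpn_rtt = 1, 80                 # ms; ballpark LAN vs. internet-VPN round trips
print(total_latency_ms(200, lan_rtt))    # 200 ms of waiting: direct SQL on a LAN is fine
print(total_latency_ms(200, vpn_rtt))    # 16000 ms of waiting: direct SQL over VPN, 16 seconds
print(total_latency_ms(2, vpn_rtt))      # 160 ms of waiting: web services over the same VPN
```

Same server, same router, same line; only the number of round trips changed, which is why the web service felt dramatically faster.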
THE “CONNECTION EXPLORATION PROCESS” = CEP
I just made this acronym up, but I hope it sticks with everyone that reads this. :)
As an existing or future customer of Interprise, your interest should not be in functionality alone; speed is critical in a hosted scenario. We collectively do not control the internet; we subscribe through our ISPs (which are often many levels deep) and expect great service. By nature the internet is much less predictable than a LAN, and we therefore should not expect the same level of performance. Our server can be amazing, but if the connectivity is below par or unpredictable, application performance will be less than expected.
I ALWAYS ask a potential customer of Interprise looking to use IS over web services to run TRACERT from the command prompt (depending on your version of Windows, this might look slightly different; on macOS and Linux, the equivalent command is traceroute). The results are only an indicator, not the end-all be-all. But if you are a prospect living in the outer regions of Canada or New Guinea and want to connect to a data server in LA, you really should run a few simple tests before you commit to an approach.
C:\> TRACERT calpop.com (a cheap hosting provider on the west coast)
Our level of commitment to, and dependency on, the internet is high. We host with very solid companies whose dedicated staff make sure the servers and their connections to the internet and to each other are optimized. The information TRACERT returns tells you some basic facts about the route your connection will take to reach our server. You can run it directly against an IP address as well. In this case, let's use a cheap hosting provider on the west coast for our test.
Notes about this result and how to read it:
The top line is what you type at the command prompt (Start > Run > CMD).
TRACERT will only return up to 30 hops. In a perfect world, you would see 15 hops or fewer to the server/IP you are trying to reach; anything beyond that, buyer beware. This is NOT a limitation of our application but a reality of the internet. Each requested packet of TCP/IP information sent to a hosting provider (an FTP request, email request, SOAP/web service request, HTTP/webpage request, etc.) may not come back to you along the same path it took to get there. So when your PC and router(s) try to reassemble the transferred information, any missing packets (due to timed-out hops) force your router to request them again. On a local server, round trips are around 1-3 ms. On a wireless LAN, routes are under 20 ms; beyond that it gets unpredictable.
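To make the hop-reading advice concrete, here is a small Python sketch that parses Windows-style TRACERT output and flags hops whose average latency exceeds a threshold. The sample trace below is made up for illustration, not a real result:

```python
import re

def parse_tracert(output: str, slow_ms: int = 250):
    """Return (hop_number, avg_ms) pairs plus a list of hops slower than slow_ms.
    Expects the Windows TRACERT format: three latency probes per hop line."""
    hops, slow = [], []
    for line in output.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*)", line)
        if not m:
            continue                      # skip blank/header lines
        hop = int(m.group(1))
        times = [int(t) for t in re.findall(r"(\d+)\s*ms", m.group(2))]
        if not times:
            continue                      # '*' timeout probes report no times
        avg = sum(times) / len(times)
        hops.append((hop, avg))
        if avg > slow_ms:
            slow.append(hop)
    return hops, slow

# Illustrative (made-up) trace: hops 1-3 are healthy, hop 4 is trouble.
sample = """
  1     2 ms     1 ms     2 ms  192.168.1.1
  2    12 ms    14 ms    11 ms  10.0.0.1
  3    35 ms    33 ms    40 ms  some-carrier-router
  4   480 ms   510 ms   495 ms  distant-hop
"""
hops, slow = parse_tracert(sample)
print(len(hops), slow)  # 4 hops parsed; hop 4 flagged as slow
```

The thresholds here (250 ms per hop) mirror the rule of thumb in this article; adjust them to your own tolerance.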
This trace was from my house over a residential cable modem, not a preferred business line. All added up, my response was around 775 ms, or roughly three quarters of a second. That is pretty good for a home-based connection. I have a total of 17 hops, which alone isn't great. The good news in my case is that the hops all average under 100 ms each, which is very good. When you see hops (specifically in order) at 250, 450, 780, 210 ms..., there will be trouble, and our app will perform poorly in that environment.
Notice in the first block of four returns that the starting IPs are the same or similar. To me the numbers are low, so it's a free pass; those are probably all in the same building. Then it jumps out to Chicago, then Denver, then San Jose, then LA. In a perfect world I would like to take two of those out, but as stated, we don't get to choose the path; we just consume what's out there. Once the trace reaches CALPOP, I don't count the hops after that.
NOTE: Before I get a lot of emails about working with ISPs to put dedicated point-to-point routes in place: I realize the concept of dedicated paths exists. I am just speaking in the generalities that most people will be dealing with.
My larger point here is to set expectations before you decide. If you are up in the mountains using a choppy wireless connection, we will want to look at an RDP/Citrix remoting solution. This does add cost to the hosting, but it substantially increases the success and speed of delivering the application.
A very strong indicator of your connection to our hosted server can be found by using the free tool at https://www.speedtest.net/
For a point of reference, choose the STAR I have highlighted as this is VERY CLOSE to our hosting location. Run the test and review the results.
And the results are in...
- My PING result is 73 ms. It's under 100, so I am OK with that so far.
- My download speed is almost 7 Mbps, which is adequate. We don't have things in our app that require extremely large downloads on a consistent basis. It may be a factor, however, when new plug-ins are added server-side or when the initial cache database is created.
- My upload speed is just as important as, or more important than, my download speed. People are always adding data to their ERP. Commercial hosting contracts usually have matched upload/download speeds; residential connections by design do not. Residents browse the web and download music and pictures; they don't need the upload speed. If you are hosting from a server, I would say a dedicated 1 Mbps is the MINIMUM you could get by with. Ask your providers to let you prove out different rates of transfer while measuring the transferred data. This is the only true way to know that your weak point won't be your connection.
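To "prove out" a rate as suggested above, it helps to know what a given link should achieve before you measure it. A minimal Python sketch; the 80% efficiency factor and the file sizes are ballpark assumptions for illustration, not specifications:

```python
def transfer_seconds(size_mb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Estimated seconds to move size_mb megabytes over a link_mbps link.
    efficiency discounts protocol overhead (an assumed ballpark, not a spec)."""
    size_megabits = size_mb * 8              # 1 byte = 8 bits
    return size_megabits / (link_mbps * efficiency)

# Uploading a 10 MB batch of ERP data over a dedicated 1 Mbps line:
print(round(transfer_seconds(10, 1.0), 1))   # about 100 seconds at 80% efficiency
# The same batch over a 10 Mbps business line:
print(round(transfer_seconds(10, 10.0), 1))  # about 10 seconds
```

If your measured transfer takes far longer than this estimate, the connection, not the application, is the likely weak point.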
Take these two core tests, which take less than a minute to run, and we will know whether we should be talking about hosted or local installations. I would be happy to review these indicators with you the first few times you run them.
I would also like to hear from anyone about the different tools you have used successfully, such as VisualRoute 2010 or other software that gives a clear picture of connectivity and analysis.
VP of Technical Services
Interprise Software Solutions, Inc.