5 Reasons to Set Up Business Website Monitoring

Today’s guest blog post is from Davis J Martin.

As a business owner, you want your website to be up and running, but you may not have considered using a third-party monitoring service. If that’s you, keep reading. It’s difficult to keep an eye on your own website 24×7, 365 days a year.

You may need someone to keep a consistent watch on your website’s uptime. Hiring a monitoring service will help you track your website’s speed, availability, functionality, and response time around the clock.

If any downtime is detected on your website, the monitoring service will immediately notify you by SMS, email, or RSS feed. Based on the reports, you can find the root cause of the issue and fix it as soon as possible to avoid serious damage to your online reputation.
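To make that concrete, here is a minimal sketch, in Python, of the kind of check such a service runs on your behalf; the URL is a placeholder, and a real service would send SMS or email rather than print:

```python
# A minimal sketch of a single monitoring check: request the page, time the
# response, and flag downtime. The URL is a placeholder assumption.
import time
import urllib.request

def check_site(url: str = "https://www.example.com") -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            elapsed = time.monotonic() - start
            print(f"UP: HTTP {resp.status}, response time {elapsed:.2f}s")
    except Exception as exc:
        print(f"DOWN: {exc}")  # this is where an alert would be sent

check_site()
```

A monitoring service essentially runs checks like this from multiple locations, every minute or so, around the clock.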

Here are five reasons why you should consider hiring a monitoring service to watch your business website.

  1. Retains customers: When customers cannot access your site, they leave and move to a competitor’s. But if your site is continuously monitored, you will know the moment it goes down and can fix it immediately. In this way, you are likely to retain customers and keep them coming back to your site.
  2. Good traffic: When your site is up and running, visitors can stay on it and find the information they are looking for, whether they are researching or purchasing your products or services. Consistent availability (uptime) is one of the main reasons a website attracts and keeps good traffic.
  3. Prevents revenue loss: Every minute of website downtime means lost customers and profits. A website monitoring service will immediately notify you of downtime by SMS or email so that you can fix the problem as quickly as possible. Fixing problems promptly not only saves time and money but protects your profits as well.
  4. Improves search engine ranking: If your website is down for a long time and you are not aware of it (as noted earlier, you cannot watch a website 24×7, 365 days a year yourself), search engine crawlers may drop your site from search results. With a monitoring service, you would be alerted the moment the site goes down and could resolve the problem quickly. By keeping your website up, you protect and are likely to improve its search engine ranking.
  5. Cost effective: If you think a website monitoring service is expensive, you are mistaken; most are quite affordable, and the benefits are great. To cite a real example, amazon.com reportedly lost millions of dollars of revenue when the site went down for just a few minutes.

Website monitoring services continuously watch your business website and let you know the moment it goes down. They also show your website’s current status, whether it is up or down. Monitoring helps your business stay ahead in today’s online world.

Author Profile:

Davis J Martin is an experienced technical consultant with an interest in content writing. Drawing on his expertise in the field, he has written informative articles on a variety of topics related to websites and monitoring. He is employed by Alertra, a leading provider of website monitoring and alerting services. Alertra offers website monitoring to businesses of all sizes, from small businesses to Fortune 500 companies.

Posted in Business Continuity

I recently found this SlideShare deck about how to run a post-mortem: Post Mortems for Humans.

I am personally a firm believer that if you want a reliable, problem-resistant system, you have to improve incrementally by asking questions and systematically eradicating problems as they arise. Designing a perfect system in a vacuum simply does not work in practice, because the subtlety and variety of environmental issues are unknown at design time. Really, you have to design in the capacity to handle problems you never designed for.

The slideshow also talks about using humor and past mistakes to defuse the tension of a post-mortem. This resonates with me as well, as I try to use humor to break through barriers all the time (probably not apparent from the writing on this blog, however…).

Labeling a root cause as “human error” is basically worthless, a point the slideshow articulates well. In fact, hunting for a single root cause at all is a fool’s errand. I have recently had this debate with co-workers as we tried to decide which of the several reasons something broke was the One True Root Cause. Who cares? The more important question is how to prevent the broadest range of possible future problems. Too many times I have seen people drill down to a single root cause and fix that one exact problem, only to have a similar (but new!) issue pop up the next day.

What do you think?


5 Lessons from Healthcare.gov Software Disaster of the Century

Healthcare.gov (Photo credit: jurno)

The most visible software disaster I’ve ever seen is Healthcare.gov, whose problems have received unprecedented media coverage and raised concerns both technical and political.

There is plenty of coverage of the saga, so I won’t bother to repost or comment on all that is happening. Instead, I will attempt to extract some lessons that can be applied to software projects in general.

1. Too Many Cooks

Continue reading

Posted in Business Continuity, Disaster Recovery

Solve Human Error Disclosures

ABLOY keys (Photo credit: Wikipedia)

It doesn’t matter how good your technology systems are if you trust people to follow certain steps to keep data secure, as a prison in England learned the hard way.

The best part of this story is that they “were reminded how to handle personal and sensitive information of patients and employees.”  Unfortunately, reminding people simply doesn’t work if you really want to make a change.

So what should they have done?

First of all, question the need for USB sticks in the first place.  Why can’t the data be stored securely in the cloud and transferred over an encrypted channel?

And the data at rest on the USB keys could be encrypted with public/private keys.  If the USB keys are lost, they would be of no use to anyone who found them.
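For illustration, here is a minimal sketch of that hybrid public/private-key approach, using Python’s third-party cryptography package; the scheme and names are my assumptions for the example, not a specific product’s implementation:

```python
# A minimal sketch of hybrid public/private-key encryption for files written
# to a USB stick, using the third-party "cryptography" package
# (pip install cryptography). The scheme and names are illustrative.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The private key never leaves a secured machine; only the public key
# travels with whoever copies data onto the stick.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_for_usb(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt data with a fresh symmetric key, then wrap that key with the
    public key. Both blobs go on the stick; neither is readable without
    the private key."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = public_key.encrypt(data_key, OAEP)
    return ciphertext, wrapped_key

def decrypt_from_usb(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    """Back at the office: unwrap the data key, then decrypt the data."""
    data_key = private_key.decrypt(wrapped_key, OAEP)
    return Fernet(data_key).decrypt(ciphertext)

ct, wk = encrypt_for_usb(b"patient and employee records")
assert decrypt_from_usb(ct, wk) == b"patient and employee records"
```

The point of the design is that a lost stick holds only ciphertext; nothing on it can be read without the private key that stayed behind.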

When you run an organization of any size that requires the protection & care of any personal data, you have to assume that people will mess up.  Empower them to do the right thing, give them the right tools, and make sure you have failsafe systems that prevent risky & costly disclosures.

Posted in Cloud, Security, Technology

Proactive Server Monitoring Pitfalls

Svet Stefanov

The following is a guest post by Svet Stefanov.  He is a freelance writer and a contributing author to WebSitePulse’s blog. Svet is currently exploring the topic of proactive monitoring and how it can help small and big businesses steer clear of shallow waters.

The first step in fixing a problem is admitting you have one.  Fixing problems in software systems is no different. Server issues are not a thing of the past, and as the Cloud continues to grow and mesh further into our lives, these problems are not going away anytime soon. Monitoring different types of servers, SaaS, or even a single website is not only good practice but also a long-term investment. If you have the time, I would like to share my thoughts on why it is better to be prepared for the worst than to blissfully ignore it.

I’ve heard people say that monitoring is the first line of defense. In my opinion, the first line of defense is a great recovery procedure and embedded redundancy. Adequate monitoring systems have two main functions – to detect a problem and to alert concerned parties. More advanced systems with extensions & complex configuration can also take action without human intervention based on a predefined set of rules.  But even a few small investments in monitoring can yield great improvements in reliability.
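As an illustration of that detect-alert-act loop, here is a minimal sketch; the URL, thresholds, mail relay, and restart command are all assumptions for the example, not any particular product’s configuration:

```python
# A minimal sketch of the detect-alert-act loop described above. The URL,
# thresholds, mail relay, and restart command are illustrative assumptions.
import smtplib
import subprocess
import time
import urllib.request
from email.message import EmailMessage

CHECK_URL = "https://example.com/health"  # hypothetical health endpoint
CHECK_INTERVAL = 60                       # seconds between checks
FAILURES_BEFORE_ACTION = 3                # predefined rule: act after 3 misses

def site_is_up() -> bool:
    """Detect: a simple HTTP probe; any 2xx response counts as 'up'."""
    try:
        with urllib.request.urlopen(CHECK_URL, timeout=10) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

def alert(text: str) -> None:
    """Alert: notify a concerned party (email stands in for SMS here)."""
    msg = EmailMessage()
    msg["Subject"] = "Monitoring alert"
    msg["From"] = "monitor@example.com"
    msg["To"] = "oncall@example.com"
    msg.set_content(text)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

failures = 0
while True:
    if site_is_up():
        failures = 0
    else:
        failures += 1
        alert(f"{CHECK_URL} failed check #{failures}")
        if failures >= FAILURES_BEFORE_ACTION:
            # Act: take automated action per the predefined rule.
            subprocess.run(["systemctl", "restart", "mywebapp"])  # hypothetical
            failures = 0
    time.sleep(CHECK_INTERVAL)
```

Even something this small embodies both functions: detection via the probe, and alerting a human before any automated rule fires.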

Internal or external monitoring? Continue reading

Posted in Business Continuity, Cloud, Disaster Recovery, Downtime, Technology, Uptime

Software You Hope You Never Use

Photo by miguelb

I was reminded recently of a project I worked on several years ago for a local university.  This software was a simple system to be used in emergency situations to help account for students who may have been affected and get them connected to medical services or parents as necessary.

Usually when I write software, I get excited by the idea of people using it and being able to enjoy it or at least have it solve a problem for them.  In this case, you hope that no one will ever have to use the software.

As I think about preparedness for an emergency, creating software to be used only in extreme and potentially catastrophic circumstances presents some unique design challenges.

Prediction

Continue reading

Posted in Uncategorized

What You Wish You Knew During a Crisis…

From my guest post at ContinuityInsights.com

During a crisis, there is almost by definition a shortage of accessible information. Because of the time pressure a disaster creates, anything considered noise gets filtered out and ignored. However, if you could create a plan to track the right information and make it available during difficult times, it could mean the difference between tragedy and a close call.

Continue reading (at ContinuityInsights.com)…

Posted in Business Continuity, Cloud, Disaster Recovery, Downtime, Technology, Uptime

Thoughts on Windows Azure Leap Day Downtime

I’d be remiss not to mention the Windows Azure downtime on Leap Day.  Because of my employment at Microsoft, I won’t speculate or say too much about the situation.  I have said before that cloud computing does not completely alleviate the risks of downtime.

I would like to reiterate that there are always inherent risks in building and running software, and failure is to be expected, not merely avoided.  The best-designed systems plan for failure and can handle these cases with grace.  This particular event with Windows Azure further highlights the need to design applications that sit on top of any infrastructure (traditional, cloud, or hybrid) in such a way that they keep working when (not if) a major portion of the infrastructure fails.
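As one small illustration of that principle, here is a minimal sketch of retry-with-graceful-degradation; the function names and cache are hypothetical stand-ins, not any real cloud API:

```python
# A minimal sketch of one such design: retry a flaky infrastructure call with
# exponential backoff, then degrade gracefully (serving the last known good
# data) instead of failing outright. The names and cache are hypothetical.
import time

def fetch_from_service() -> dict:
    """Stand-in for a call to cloud infrastructure that may be down."""
    raise ConnectionError("service unavailable")

last_good_result: dict = {"items": [], "stale": True}  # hypothetical cache

def fetch_with_fallback(retries: int = 3, base_delay: float = 0.5) -> dict:
    for attempt in range(retries):
        try:
            result = fetch_from_service()
            last_good_result.update(result, stale=False)
            return result
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    # All retries failed: degrade gracefully rather than crash the app.
    return last_good_result

print(fetch_with_fallback())  # serves the stale cache while the service is down
```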

Don’t be fooled into thinking that any cloud service provides a silver bullet for resiliency.  Outsourcing your IT infrastructure to a cloud provider greatly improves your resiliency for the cost you pay; most of us cannot afford to build and maintain a fault-tolerant worldwide infrastructure.   And when a failure does occur, don’t overlook the economies of scale that benefit the application tenants most of the time, when things are working properly.

Posted in Business Continuity, Cloud, Disaster Recovery, Downtime, Technology, Uptime

Through the Storm – Interview with Arterian IT Founder Jamison West

Jamison West

"Having a comprehensive plan that's bigger than just IT is key, but often IT can be the forcing function to get you started."

I recently had a chance to interview Jamison West of Arterian. Jamison, who founded the company that became Arterian in 1995, envisions a future where every small to mid-sized company has an IT partner as a vital part of its core operations team, keeping it free from disaster and flourishing.

SoftwareDisastersBlog: How do you help your customers prevent and prepare for IT disasters?

Jamison West: We see with our customers that reliance on connectivity is higher than it’s ever been for businesses to execute and support their customers. People now expect email to work like instant messaging, sent and received as fast as they type it.  We try to prevent IT issues  by adding redundancy to make sure that if there are problems — natural disasters or bad weather like we had recently in Seattle  — our customers are still up and running at least for critical operations.

Continue reading

Posted in Business Continuity, Cloud, Disaster Recovery, Downtime, Technology, Uptime

Learning from the Costa Concordia Shipwreck

Costa Concordia (Photo from csmonitor.com)

On Friday, January 13th, the Costa Concordia had a disaster: it ran into rocks off Italy’s western coast and eventually rolled onto its side in the water.  The toll on human life is tragic: several are dead (the number still growing), more are missing, and everyone involved went through a traumatic experience.

The saddest part of the story is that it appears it could have been prevented.  And if not prevented, it could have been handled better.

As I’ve been following the story and reflecting on it, a few things have jumped out that I think we can learn from. 

Human Error

Continue reading

Posted in Business Continuity, Cloud, Disaster Recovery, Downtime, Technology, Uptime