Testing and Monitoring the Performance of Your Web-Based Solutions
By Scott Mauvais, MCSE, MCSD, MCDBA
You would think that today's blazingly fast
processors would have relegated performance problems to a
historical footnote. After all, I was able to develop crisp,
responsive Web apps running on a P133 with a whopping 64MB of
RAM. You would think that now that processors are roughly 20
times faster in raw MHz alone, plus all the architectural
advances such as hyper-threading and more robust pipelines, we
would never see a slow application again.
Faster hardware, however, has allowed
us to tackle more complex problems. As an example, take
streaming video. I would never have considered creating a Web
site that included video on my old P133, but with today's
high-end hardware, broadcasting my own content is not out of
the question.
This complexity increases not only
the demand on the hardware, but also the number of moving
parts and hence the opportunities for something to go wrong
that will bring your application to a crawl. In this article,
I discuss some strategies you can use to make sure things
don't go wrong.
The best resource these days for testing and
fine-tuning the performance of your Web site is Performance
Testing Microsoft® .NET Web Applications by the Microsoft
Application Consulting and Engineering (ACE) team. This is a
great book that not only covers testing tools and best
practices for using them but also dives into performance
monitoring and testing techniques. To get a feel for the book,
you can review the table
of contents and Chapter
2, "Preparing and Planning for the Performance Test," on
the Microsoft Press® site.
Performance testing (all testing, for that matter) involves
five basic steps:
- Set the goal.
- Measure the progress against the goal.
- Analyze the results.
- Tune and debug the application as necessary.
- Repeat steps 2 to 4.
In this article, I'm going to focus on the first two steps.
Analyzing the results is outside the scope of this article and
is covered extremely well in Performance
Testing Microsoft .NET Web Applications. For obvious
reasons, a discussion of tuning and debugging is
application-specific, so I won't cover that here either. At
the end of the article, I'll cover two common pitfalls of
performance testing and suggest a strategy to prevent them
from affecting your
project.
Performance Testing in the Wild
Whenever I am called out to look at a performance problem
for an application (which is always already in production;
nobody seems to realize they have problems until they have
already released the app), I always ask the same simple
question: "How much are you underperforming by?" Rather than
getting a simple answer such as 20 percent, I always get
complicated answers that list everything they have tried and
everything they have done to isolate the problem. I usually
listen for a polite period of time and then interrupt and ask
the same question but slightly differently: "So, what sort of
performance do you need to achieve?" After some back and
forth, it becomes clear that they have no idea.
In a reactive situation such as this, the best you can do
is to start looking for obvious problems (blocking in the
database, thread contention on the Web server, and so on) and
trying to resolve the specific issues, hoping that they
address the overall performance problem. Rather than trying to
fix bad performance, a better approach is to prevent it from
happening in the first place. The first step to preventing
performance problems is to set your performance goal up
front.
Setting Your Performance Goal
The main reason to set the performance goal at the
beginning of your project is that performance problems are
very expensive to fix late in the project. Occasionally,
applications perform poorly because of a poor choice of
algorithms or because a feature is misused or misconfigured.
Typically, however, applications fail to meet performance
expectations because of design problems.
Setting your goal at the beginning is also important
because it provides a stake in the ground and gives the entire
team a clearly defined goal to strive for. Without such a
goal, an individual dev lead has no way of knowing whether his
or her component is fast enough and scales well enough. As a
result, some teams will deliver components that don't perform
well, while others will waste valuable resources tuning
components that are already good enough.
Now that you have decided to set your goal, how do you go
about articulating it? The most common measure of performance
is the number of concurrent users; more sophisticated goals
also include response time. You should always use both aspects
in your goal. Response time is measured in time to first byte (TTFB)
and time to last byte (TTLB), which measure the delay between
the user's request and when the client receives the first and
last byte, respectively. Just because your Web site can
support 100,000 concurrent users doesn't mean it is
successful, especially if it requires 10 minutes to render
each page.
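To make TTFB and TTLB concrete, here is a minimal C# sketch that times a single request. The URL is a placeholder for a page on your own site, and the moment GetResponse returns is used only as an approximation of when the first byte arrives.

using System;
using System.IO;
using System.Net;

class ResponseTimeProbe
{
    static void Main()
    {
        // Hypothetical page; substitute one from your own site.
        string url = "http://localhost/store/default.aspx";

        DateTime start = DateTime.UtcNow;
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream body = response.GetResponseStream())
        {
            // GetResponse returns once the headers (and first bytes) arrive,
            // so this is an approximation of time to first byte.
            TimeSpan ttfb = DateTime.UtcNow - start;

            // Drain the body to find out when the last byte arrives.
            byte[] buffer = new byte[8192];
            while (body.Read(buffer, 0, buffer.Length) > 0) { }
            TimeSpan ttlb = DateTime.UtcNow - start;

            Console.WriteLine("TTFB: {0:F0} ms, TTLB: {1:F0} ms",
                ttfb.TotalMilliseconds, ttlb.TotalMilliseconds);
        }
    }
}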
Of course, defining exactly what a user is and what actions
he or she performs can be complicated, especially when one
thinks of Web services or EAI applications. The best way to
approach this task is to work backwards from the business
requirements. First off, putting performance testing in terms
of meeting your business goals is an effective way to ensure
you will get funding for it. Second, your users are
interacting with your Web site in a way that results in them
performing some action that is beneficial to your business (if
not, maybe you need to reassess the entire project), so this
is a natural place to start working backwards from. This
action may involve downloading the newest trial version of
your software, querying your knowledge base, updating the
shipment status of one of your orders, and so on. To make
things simple, let's assume you have a typical e-commerce site
and the business driver is orders per day.
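To make the arithmetic behind that goal concrete, here is a small worked example in C#. Every number in it (orders per day, peak-hour share, pages per order) is an assumption for illustration; you would replace them with figures from your own business requirements.

using System;

class GoalFromBusinessDriver
{
    static void Main()
    {
        double ordersPerDay  = 50000;  // assumed business target
        double peakHourShare = 0.15;   // assume 15% of the day's orders arrive in the busiest hour
        double pagesPerOrder = 8;      // assume placing an order takes about 8 page requests

        double peakOrdersPerSecond = ordersPerDay * peakHourShare / 3600;
        double peakPagesPerSecond  = peakOrdersPerSecond * pagesPerOrder;

        Console.WriteLine("Peak orders per second: {0:F2}", peakOrdersPerSecond);
        Console.WriteLine("Peak page requests per second (order path only): {0:F2}",
            peakPagesPerSecond);
    }
}

With these assumed numbers, roughly two orders per second at the peak works out to about 17 page requests per second for the order path alone, before you account for all the visitors who browse but never buy.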
When you have defined your business driver, the next step
is to measure your application's performance.
Measuring Your Progress Against the Goal
There are two pieces to measuring performance: a usage
profile that defines the behavior of your users on your site
and tools that simulate these users and monitor your
application's performance.
A usage profile lists all the actions a user can perform on
your site. In our example, placing an order is the business
driver, so the usage profile defines the steps required for a
user to place an order. Your usage profile would then include
actions such as logging into your site, searching for an item,
placing it in the shopping basket, and checking out. When you
are constructing your usage profile, don't forget to include
the browse-to-buy ratio and a user's think time. The
browse-to-buy ratio defines how many shoppers visit your site
compared to how many actually buy something. Think time refers
to the length of time a user spends on a page before clicking
on to the next one. Remember, the length of time a user spends
on a page varies depending on its function. In other words,
your users probably spend more time on your product
description pages than they do on search results.
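As a rough illustration of what a usage profile might look like when written down as data, here is a C# sketch. The actions, percentages, and think times are invented for this example (they assume a 10:1 browse-to-buy ratio) rather than taken from a real site.

using System;

struct ProfileStep
{
    public string Action;
    public double ShareOfVisitors;   // fraction of all visitors who perform this step
    public int    ThinkTimeSeconds;  // average pause before the next click

    public ProfileStep(string action, double share, int thinkTime)
    {
        Action = action;
        ShareOfVisitors = share;
        ThinkTimeSeconds = thinkTime;
    }
}

class UsageProfile
{
    static void Main()
    {
        // Only 10 percent of visitors reach checkout (the 10:1 browse-to-buy ratio).
        ProfileStep[] profile =
        {
            new ProfileStep("Home page",           1.00, 10),
            new ProfileStep("Search for an item",  0.80, 15),
            new ProfileStep("Product description", 0.60, 45),  // longest think time
            new ProfileStep("Add to basket",       0.25,  5),
            new ProfileStep("Log in",              0.12,  8),
            new ProfileStep("Check out",           0.10, 30)
        };

        foreach (ProfileStep step in profile)
            Console.WriteLine("{0,-20} {1,6:P0} of visitors, think {2,2}s",
                step.Action, step.ShareOfVisitors, step.ThinkTimeSeconds);
    }
}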
There are many good resources on constructing user
profiles, so I won't go into more detail here. A good place to
start is my earlier article "High
Performance Web Sites from the Ground Up," which includes
a sample profile as well as several links to other
documents.
The next step is to load this profile into your suite of
stress tools and start testing your application. For testing
Microsoft .NET Web applications, I use Microsoft Application
Center Test (ACT) to generate the load and to perform the
initial analysis. ACT is a software-based testing tool that
ships as part of Microsoft Visual Studio® .NET Enterprise and
Architect editions. Besides simply generating load on your
servers, it also captures performance metrics so that you can
analyze and diagnose many problems right in the tool. The best
part of ACT is that it understands the intricacies of the .NET
architecture such as cookieless sessions and view state.
Better yet, because it ships with Microsoft Visual Studio
.NET, you probably already have it installed. You can learn
more about ACT by reading the product
documentation on MSDN®.
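ACT does this work for you, but the idea behind any load generator is simple enough to sketch. The following C# fragment is not ACT's scripting model; it is only a rough illustration, with an invented URL and invented concurrency, think-time, and iteration settings, of what such a tool does: each thread plays one virtual user that requests a page, waits out a think time, and records how long the response took.

using System;
using System.IO;
using System.Net;
using System.Threading;

class TinyLoadGenerator
{
    // All of these settings are placeholders for illustration.
    const string Url          = "http://localhost/store/default.aspx";
    const int    VirtualUsers = 10;    // simulated concurrent users
    const int    ThinkTimeMs  = 5000;  // pause between a user's requests
    const int    Iterations   = 20;    // requests per virtual user

    static void Main()
    {
        Thread[] users = new Thread[VirtualUsers];
        for (int i = 0; i < VirtualUsers; i++)
        {
            users[i] = new Thread(new ThreadStart(RunUser));
            users[i].Start();
        }
        foreach (Thread user in users)
            user.Join();
    }

    static void RunUser()
    {
        for (int i = 0; i < Iterations; i++)
        {
            DateTime start = DateTime.UtcNow;
            try
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(Url);
                using (WebResponse response = request.GetResponse())
                using (Stream body = response.GetResponseStream())
                {
                    // Read the whole response so the timing includes the last byte.
                    byte[] buffer = new byte[8192];
                    while (body.Read(buffer, 0, buffer.Length) > 0) { }
                }
                Console.WriteLine("Response in {0:F0} ms",
                    (DateTime.UtcNow - start).TotalMilliseconds);
            }
            catch (WebException ex)
            {
                Console.WriteLine("Request failed: {0}", ex.Status);
            }
            Thread.Sleep(ThinkTimeMs);
        }
    }
}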
The real strength of Performance
Testing Microsoft .NET Web Applications is its chapters
on ACT, so I won't go into more detail here because there is
really nothing the authors missed. It provides a great
step-by-step description of how to use ACT; it also describes
some of its more complex usages such as driving load directly
against your data tier. It also has chapters devoted to tools
such as Performance Monitor and Network Monitor that ship with
Microsoft Windows® 2000.
Repeating Your Tests
While this may sound like the instructions from a shampoo
bottle, repeating your tests is essential. Performance
problems rarely show up overnight. It's not like a project is
going fine and then all of a sudden performance drops off. In
most projects with performance problems, you will see a slow
degradation of performance as more and more features are added
and code that was originally stubbed out gets implemented. If
this pattern fits your project, it will be important to run
your performance tests regularly so that you can start to do
some trend analysis.
There are, however, cases in which performance really does
change overnight. Maybe someone just checked in a major change
to a core component or maybe one of the DBAs tried
(unsuccessfully) to tune a stored procedure. In cases like
this, it's even more important to test regularly so that you
can catch these errors right after they happen. First off, the
sooner you catch an error like this, the less likely you are
to have dependencies build on top of this underperforming
code. Second, if you catch it right away, the developer will
still remember what he or she did. It's much easier to debug
code written yesterday than code written last month.
The most successful performance teams integrate performance
testing with their daily build verification tests. Using this
approach, they run every build through a battery of
performance tests. These tests focus on critical components
and sections of code that are currently undergoing a lot of
churn. On a periodic basis, usually weekly, the performance
team performs more extensive, long-haul tests that provide
more coverage and specifically look for memory leaks and other
types of resource depletion.
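What might such a daily check look like? Here is a minimal sketch, assuming a hypothetical checkout page and an invented two-second TTLB goal: it runs a few requests against one critical page and returns a non-zero exit code when the average response time exceeds the goal, so a nightly build script can flag the regression the day it is introduced.

using System;
using System.IO;
using System.Net;

class PerfBvt
{
    static int Main()
    {
        string url    = "http://localhost/store/checkout.aspx"; // hypothetical critical page
        double goalMs = 2000;                                    // assumed TTLB goal
        int samples   = 5;

        double totalMs = 0;
        for (int i = 0; i < samples; i++)
        {
            DateTime start = DateTime.UtcNow;
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            using (WebResponse response = request.GetResponse())
            using (Stream body = response.GetResponseStream())
            {
                byte[] buffer = new byte[8192];
                while (body.Read(buffer, 0, buffer.Length) > 0) { }
            }
            totalMs += (DateTime.UtcNow - start).TotalMilliseconds;
        }

        double averageMs = totalMs / samples;
        Console.WriteLine("Average TTLB: {0:F0} ms (goal {1:F0} ms)", averageMs, goalMs);

        // A non-zero exit code lets the build script fail the verification run.
        return averageMs <= goalMs ? 0 : 1;
    }
}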
Keeping an Eye on Pitfalls
While performance
testing is almost always a good thing, there are some common
pitfalls that you need to keep your eye on. The first, and
most common, problem is that performance testing starts too
late. I have touched on this a bit, so the impact should be
pretty obvious. If this happens to you, you'll either run out
of time and be forced to release your code before you can be
sure it meets the performance requirements, or you will find
that you have built a large portion of your application on a
faulty design.
Where the first
pitfall results from lack of planning around performance,
over-planning causes the second. I have been involved in a
handful of projects in which the approach to performance
testing was so complex that the team became bogged down in
analysis as they tried to test every case. The end result is
quite similar to the teams that started too late: the app
either shipped without adequate testing or it was late in the
project before the performance team discovered serious design
problems.
Fortunately, I have
a simple, easy-to-implement rule that I have used to address
both pitfalls: just like functional testing, actual
performance testing (not just planning but real tests) must
start the same day that development begins. This approach has
two benefits: it ensures that performance testing does not
start too late, and it avoids analysis paralysis. If the
performance team starts testing on day one, they will focus
their efforts on the most important scenarios and will address
less commonly used features as time permits.
For More Information
In this article, I started off with a discussion of the
increased complexity of today's highly distributed Web
applications and the importance this places on performance
testing. I discussed the importance of setting a performance
goal so that you can address any performance problems
proactively rather than waiting until they become show
stoppers in production. I then walked you through the process
of using the business drivers to set your performance goal.
From there I moved on to a discussion of user profiles and
testing tools such as Microsoft ACT. Finally, I discussed the
importance of running your performance tests regularly and
some strategies that I use to ensure that performance testing
teams are successful.
As I mentioned at the beginning of this article, the best
place to learn about performance testing is Performance
Testing Microsoft .NET Web Applications, by the Microsoft
Application Consulting and Engineering team. You'll learn how
to take advantage of the best available tools to plan and
execute performance tests, configure profiling tools, and
analyze performance data from your presentation tier, through
your business logic, all the way to your data tier using the
same methodology that Microsoft uses to stress test its own
sites.
You might also want to check out the following Microsoft
Press resources, which provide in-depth documentation for all
issues related to developing high-performance
applications:
- Inside
Microsoft Windows 2000, Third Edition, provides the
definitive guide to the internals of Windows 2000. The
Microsoft product team wrote this book with full access to
the source code, so you know you are getting the most
comprehensive, technical information available.
- Programming
Server-Side Applications for Microsoft Windows 2000
takes an in-depth tour of Windows 2000 services and provides
expert guidance for designing and implementing applications
that exploit their capabilities.
- Microsoft
SQL Server® 2000 Performance Tuning Technical Reference
provides the best source of practical information you
need to configure and tune a Microsoft SQL Server 2000
database for better, faster, and more scalable solutions.
The best part of this book is that it goes beyond just
Microsoft SQL Server 2000 and looks at optimizing the
underlying operating system and hardware.
- Microsoft
Windows 2000 Server Resource Kit is the best resource
for configuring and optimizing Microsoft Windows. While it
is geared mainly for administrators, it includes many, many
sections that are must-reads for developers who want to
understand what Microsoft Windows is doing under the
hood.
Microsoft Press provides in-depth documentation for these
and all the other issues related to developing for .NET. For a
complete list of .NET titles from Microsoft Press, see the Inside
Information About Microsoft .NET
page.
Last Updated: Tuesday, November 5, 2002