Developer Feature Article

Enforcing Security in Multi-Tier Applications

By Scott Mauvais
Most
developers and IT professionals I talk to these days are very
focused on security. This surge of interest in security was
triggered by the recent Code Red and Nimda worm attacks, which
exploited a vulnerability in IIS-enabled systems. These
malicious attacks were a real wake-up call for IT departments
as network administrators and developers realized they had not
done everything they should have to secure their systems
against Internet-based threats. For system administrators, the
task facing them is relatively clear cut: they simply need to
make sure their systems are locked down. Now, clear cut does
not mean easy. For large networks this task can be quite
daunting. To help out administrators, Microsoft responded with
the Strategic
Technology Protection Program to ensure customers have
applied the most current service packs and hot fixes and are
following best practices when securing their vital corporate
assets.
As a developer, the process of
ensuring your applications are secure is a bit more complex.
There are no hot fixes you can apply and no settings you can tweak: security has to be built into your applications, not layered on top of them. Accordingly, you need to be thinking about
security when you first start envisioning your application.
Unfortunately, most developers approach security as a feature
to be added later in the project. The best way to learn about
building secure applications is to read Michael Howard and
David LeBlanc's new book Writing
Secure Code. If you don't have it already, you should also
get a copy of Michael's earlier book Designing
Secure Web-Based Applications for Microsoft Windows 2000.
Michael is a program manager on the Windows 2000 security
team, so he provides unique insight into making the most of
the Windows security model.
In this article, I want to help you decide where to enforce security. I cover in
brief the golden rules of security and identify which native
Windows services map to them. The major part of the article,
however, is dedicated to a topic that often confuses people
and leads to some of the strongest disagreements among IT
professionals and developers: the benefits and drawbacks of
enforcing security in the middle (or business) tier versus the
data tier.
Defining Security
The main goal of security is quite simple: ensure that
users access an application only in the manner intended. To
achieve this goal, a robust security system has to implement
the following six security services:
- Authentication. The system has to be
able to identify users (including trusted systems) and
verify that they are who they claim to be.
- Authorization. Once the system has
authenticated a client, it has to determine whether that
particular client can access a given resource.
- Audit. The system has to log all
clients' actions. These audit trails serve two functions:
they can be used to spot attacks on your site (for example,
when an administrator notices a series of failed login
attempts over a short period of time from a particular
address) and as a deterrent (for example, users will be less
likely to attempt to access the CEO's private home directory
if they know all attempts are logged).
- Privacy. The system has to protect
sensitive data so that it is not disclosed to unauthorized
people.
- Integrity. To be credible, the system
has to ensure the validity of the data by making sure it is
complete, accurate, and available, and has not been manipulated.
(Some people view availability as a separate function that a
security subsystem must provide, so you will sometimes see
it listed as the seventh rule. I, on the other hand,
consider the availability of data to be inseparable from its
integrity, so I merge them together.)
- Non-repudiation. The system must
provide evidence that an action has occurred and prevent a
user from denying he or she was responsible for it.
The following table identifies some Windows 2000
technologies that enable you to implement the six golden rules
of security. Although this is not an exhaustive list, it does
give you a feel for the types of approaches upon which most
applications rely.
Table 1. Windows 2000 Security Technologies

Security Service           | Technology
Authentication             | Kerberos; Windows NT Challenge/Response; Basic Authentication; Digest; X.509 certificates
Authorization              | Windows users and groups; access control lists (ACLs); SQL Server permissions; Web access permissions; COM+ roles
Audit                      | Windows 2000 Security Event Logs; IIS Web logs; SQL Server logs and Profiler traces
Privacy                    | SSL/TLS; IPSec; Encrypting File System (EFS); X.509 certificates
Integrity and Availability | SSL/TLS; IPSec; digital signatures; X.509 certificates; Microsoft Clustering Services (MSCS); Network Load Balancing (NLB); Windows Load Balancing Service (WLBS)
Non-repudiation            | Audit logs
Enforcing Security
When designing the security model of your system, you have
to decide early on where you want to enforce security. Many
articles and white papers get mired in complex discussions
that cover all the nuances of the various choices depending on
the development language used, the size of the team, the
target platform, and so on. In the end, every one of these
choices fits into one of the three tiers of your typical
Windows DNA or .NET architecture: presentation, middle, and data. For obvious reasons (difficulty to control, ability to spoof, and so on), the presentation tier is unacceptable for enforcing security. This leaves us with a choice between the middle and data tiers.
The Benefits of Checking Security at the Door
Many people assume that the data tier is the best place to
enforce security; after all, it is your data that you are
trying to protect. However, I maintain that you are far better
off enforcing security in the business tier using COM+ and
configuring the database so that your COM+ application can
access it under its own identity. The database then trusts
COM+ to correctly authenticate and authorize individual users.
Once a client has been authenticated and authorized by COM+,
the client's ability to manipulate the data is limited by the
functionality that the method call provides.
The benefits of enforcing security in the middle tier
include improved performance, centralized control of security,
better security, easier error handling, and increased system
flexibility. The following sections examine each of these
benefits in some detail.
Increased Performance
Performance is your biggest gain when enforcing security in
the middle tier. If all the clients are accessing the database
through a COM+ object that then connects to the database under
its own identity, each connection to the database is
identical, enabling you to take advantage of connection
pooling. By contrast, if each client were to connect to the
database with its own credentials, each client would require
its own connection, thus greatly increasing the load on both
the middle tier and the database. Furthermore, enforcing
security in the middle tier frees the database from having to
use complex security models. This is important for the
following reason: although you can easily scale out the middle
tier, there is usually a single instance of the data store, so
you want to minimize the load on it by shifting processing
elsewhere if possible.
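To make the pooling point concrete, here is a minimal C++ sketch using the ODBC API. The server and database names (ACCOUNTS, Bank) are hypothetical; the point is that because the COM+ application connects under its own identity with integrated security, every call produces an identical connection string, so the driver manager can satisfy repeat requests from its connection pool.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Hypothetical server and database names. The key point is that the
// string is the same for every caller, so pooled connections are reused.
static const char kConnStr[] =
    "Driver={SQL Server};Server=ACCOUNTS;Database=Bank;Trusted_Connection=yes;";

SQLRETURN QueryAsApplicationIdentity(void)
{
    // Enable ODBC connection pooling for the whole process
    // (must be done before any environment is allocated).
    SQLSetEnvAttr(NULL, SQL_ATTR_CONNECTION_POOLING,
                  (SQLPOINTER)SQL_CP_ONE_PER_DRIVER, 0);

    SQLHENV henv = NULL;
    SQLHDBC hdbc = NULL;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
    SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);

    // Because the credentials never vary per caller, this connect is
    // typically satisfied from the pool rather than by a fresh login.
    SQLRETURN rc = SQLDriverConnect(hdbc, NULL, (SQLCHAR *)kConnStr, SQL_NTS,
                                    NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

    // ... execute the stored procedure here ...

    if (SQL_SUCCEEDED(rc))
        SQLDisconnect(hdbc);   // returns the connection to the pool
    SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
    SQLFreeHandle(SQL_HANDLE_ENV, henv);
    return rc;
}

If each caller supplied its own user ID and password instead, every distinct set of credentials would need its own pooled connection, and most of the pooling benefit would be lost.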
As an additional performance boost, enforcing security in
the middle tier also reduces the load on your application
because any attempt to access resources without permission
will be detected earlier and rejected before additional
processing can take place. Take, for instance, the standard
CS101 banking example of role-based security. In this
scenario, tellers are able to process deposits, withdrawals,
and transfers up to $10,000, while only managers can authorize
transfers over $10,000. If you were to enforce this security
model on the database level, you probably would check the
value of the amount parameter passed into the stored procedure
that implements the transfer and compare it to the maximum
allowed for the caller's role.
The problem with this approach is that the transfer
operation taxes your system resources, and much of the work
has already been accomplished before the security check even
begins. Before the stored procedure can even be called, the
client must first connect to the database and then begin a
transaction. The database would respond by first allocating
memory and some threads to the connection and then locking
tables to prevent concurrent access during the transaction.
Only then does the stored procedure get around to performing
the security check. If the check fails, it needs to unwind all
this work. In this scenario, it would have been more efficient
to perform the security check at the front door before
beginning to process the request.
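Here is a minimal sketch of what that front-door check might look like in C++ inside a configured COM+ component. The method name TransferFunds and the role name Managers are illustrative, and the sketch assumes role-based security has been enabled for the application.

#include <windows.h>
#include <comsvcs.h>   // ISecurityCallContext (COM+ services)

// Illustrative COM+ method: reject oversized transfers at the front door,
// before any database connection, transaction, or stored procedure work.
HRESULT TransferFunds(double amount, long fromAcct, long toAcct)
{
    ISecurityCallContext *pCtx = NULL;
    HRESULT hr = CoGetCallContext(__uuidof(ISecurityCallContext),
                                  (void **)&pCtx);
    if (FAILED(hr))
        return hr;                      // no COM+ call context available

    VARIANT_BOOL bSecure  = VARIANT_FALSE;
    VARIANT_BOOL bManager = VARIANT_FALSE;
    pCtx->IsSecurityEnabled(&bSecure);

    if (amount > 10000.0)               // only managers may exceed $10,000
    {
        BSTR bstrRole = SysAllocString(L"Managers");
        hr = pCtx->IsCallerInRole(bstrRole, &bManager);
        SysFreeString(bstrRole);

        if (FAILED(hr) || bSecure == VARIANT_FALSE || bManager == VARIANT_FALSE)
        {
            pCtx->Release();
            return E_ACCESSDENIED;      // rejected before any database work
        }
    }
    pCtx->Release();

    // ... only now connect to the database under the COM+ application's
    // own identity and perform the transfer ...
    return S_OK;
}

Because the check uses only the COM+ security call context, a rejected request never touches the database at all.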
Centralized Control
Until now I have (intentionally) used data tier and
database interchangeably. Obviously, not all corporate data
lives in databases. Apart from databases you have mail stores
(Microsoft® Exchange, Lotus Notes), file systems (Microsoft Excel and Word files, DBFs), directory services (Microsoft Active Directory®, LDAP), and untold legacy applications. Even
when data is stored in relational databases, it is unlikely
that all the data you want to access is stored in a single
vendor's database product. Trying to coordinate security
across all these data stores requires a great deal of effort.
In the banking scenario, for example, imagine that the client
needs to log in to an accounts database on Microsoft SQL
Server 2000 to perform the actual transfer and to the customer database on DB2 running on an AS/400 to perform fraud
detection. Here you need to coordinate two separate user
directories with their associated account lists and security
permissions.
Things could get even more complicated if you wanted to
determine whether a person is a teller or a manager by
checking his or her job title in Microsoft Active Directory.
What if you wanted to import the transactions from an Excel
file stored on a file share? You get the idea. You can
simplify the management significantly by performing the
security check in COM+. Here you have a single interface where
you can create roles, add users to roles, and define
permissions at any level of granularity down to the method
call.
That's nice for administrators, but what about us
developers? As soon as the administrators have set up their
roles and permissions, COM+ does all the hard work of
authenticating the client and determining whether the user is
authorized to perform the requested action. Without COM+, you
have to call the authentication interfaces of each data store and pass the credentials in the proper format.
Enhanced Security
I use the term "proper format" rather loosely. Each of
these authentication interfaces relies on a separate directory
for user authentication and authorization. Therefore, if you
want to log on to various systems, you will need to
impersonate the user in some manner. This usually means
connecting to a meta-directory or calling a proprietary
security API or obtaining a token of some sort (in some poorly designed systems this "token" is nothing more than the user's user ID and password!). The problem here is that every time you pass a user's credentials around, you are opening up another
vulnerability that can be exploited by a knowledgeable hacker.
Remember, a knowledgeable hacker can be your contractor who
just left because his or her contract was not renewed, or someone who used to work for your vendor and knows those proprietary APIs and default passwords.
Easier Error Handling
The error-handling benefit is pretty straightforward. If you are connecting to multiple data stores and relying on them to perform security checks for you, you have to be ready to handle security failures throughout your code. Rather than having a single point where you can
check for security errors, each section of code that requests
information from a back-end store needs to have an appropriate
handler that can trap the error and respond appropriately.
Worse, the data sources will throw these errors in a variety
of formats, so you need to be intimately familiar with the
format of an "access denied" message returned by, for example, Microsoft SQL Server and DB2. Apart from the obvious headache this causes developers, it tightly couples your business and data tiers.
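From the caller's point of view, that single point can be as simple as the sketch below. IBankTransfer is a hypothetical stand-in for your own COM+ interface; the point is that the client handles one well-known HRESULT rather than a different access-denied format from each back-end store.

#include <windows.h>
#include <stdio.h>

// Hypothetical middle-tier interface; a real COM+ component would define
// its own. Only the error-handling pattern is the point here.
struct IBankTransfer
{
    virtual HRESULT TransferFunds(double amount, long fromAcct, long toAcct) = 0;
};

HRESULT RequestTransfer(IBankTransfer *pBank)
{
    HRESULT hr = pBank->TransferFunds(15000.0, 1001, 2002);
    if (hr == E_ACCESSDENIED)
    {
        // One uniform place to handle authorization failures, instead of
        // trapping the different "access denied" formats returned by
        // SQL Server, DB2, Exchange, and every other back-end store.
        fwprintf(stderr, L"Transfer rejected: caller is not authorized.\n");
    }
    return hr;
}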
Increased Flexibility
If you look at security as a business rule, the reasons for
enforcing security in the middle tier become obvious. Business processes, rather than technical requirements, determine what is privileged information. For example, next year's sales
forecast may be so sensitive today that only a few people in
the company can access it in its entirety. Six months from now
it may be available to the entire staff (but not outsiders) as
they track the company's progress toward its goals. In a year
it may be published in the investor relations section of your
Web site.
By contrast, if you enforce the security in the data store itself, each time you want to change who can access the data you have to modify the security settings on the data store. If
multiple systems are involved (as is commonly the case with
sales forecasts), you have to update each of them separately,
taking care to keep them synchronized. Conversely with COM+,
all you need to do is add new user groups to the COM+ roles
you already defined. For example, you start off with
ExecutiveStaff, then you add the AllStaff group, and finally
you specify IUSR_MachineName.
The Drawbacks of Checking Security in the Middle Tier
Although there are many benefits of performing security
checks in the middle tier, you should be aware of the
drawbacks of this method and how they impact sensitive data,
your ability to audit, and the use of multiple applications
that access a single data source. The following sections
describe these drawbacks.
Sensitive Data
Most data stores offer fine-grained security mechanisms
that directly address the specifics of the type of data. For
example, in databases you can limit users to a specific set of
columns or rows in a table; in e-mail systems you can allow
users to read and reply but not forward a given message; in
file systems you can specify whether users can create as well
as modify files and so on. Depending on the type of data store
and the degree of sensitivity of its contents, you might find
that you need to apply security settings that are unique to
the data store. Typically, you can address this scenario by
performing most of the security checks in the middle tier and
handling the unique cases in the data store.
Audit
If all users come through the
middle tier and then connect to the back-end using the
security context of the COM+ application, you lose the ability
to audit the activity of the original clients. While this may
seem like a significant issue at first (and sometimes it is),
the question comes down to where you want your clients to log
in. The obvious benefit of auditing at the data store is that
you have a definitive record at the data source of all
modifications. The somewhat less obvious downside is the
difficulty of tracking and correlating client actions across
many back-end systems. In cases where auditing is an issue, I
usually encourage people to look closely at the available
log-in options in the middle tier and see whether they are
sufficient. In my experience, it is much easier to augment the
middle-tier login than it is to enforce security across
several back-end systems and correlate the audit logs.
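If you do decide to augment the middle-tier audit trail, a minimal sketch might look like the following C++ fragment. It assumes the caller's account name was captured when COM+ authenticated the call (retrieving it from the security call context is not shown), and the event source name BankMiddleTier is purely illustrative.

#include <windows.h>
#include <wchar.h>

// Write a middle-tier audit record to the Application event log.
// callerName is assumed to come from the COM+ security call context,
// captured when the caller was authenticated (not shown here).
void AuditTransfer(const wchar_t *callerName, double amount,
                   long fromAcct, long toAcct)
{
    wchar_t msg[256];
    swprintf(msg, 256,
             L"Transfer of %.2f from account %ld to %ld requested by %ls",
             amount, fromAcct, toAcct, callerName);

    // "BankMiddleTier" is an illustrative event source name.
    HANDLE hLog = RegisterEventSourceW(NULL, L"BankMiddleTier");
    if (hLog != NULL)
    {
        LPCWSTR strings[1] = { msg };
        ReportEventW(hLog, EVENTLOG_INFORMATION_TYPE, 0, 0, NULL,
                     1, 0, strings, NULL);
        DeregisterEventSource(hLog);
    }
}

Because every request flows through the middle tier, one log like this captures the original caller for every back-end system the component touches.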
Multiple Applications
The challenge when using
multiple front-end applications is to ensure the security
policies are applied consistently. Typically, this is of
concern when companies are moving from a two-tier, client/server architecture to a multi-tier, distributed one. There
is no easy answer to this dilemma, because the correct
approach depends on the direction the company is pursuing. If
you expect your company to have a client/server model for some
time to come, you have to decide whether you want to trust
that each client application will properly enforce security,
or whether you want to take the pain of enforcing security by
coding it into the back end and centralizing it. Conversely,
if you are moving to a multi-tier model, you need to decide
when you will start migrating security from the back-end
systems into the middle tier.
The Bottom Line
If you are going to be living
with two-tier systems well into the future, you have some hard
trade-offs to make about security. You will either need to
accept the limitations of enforcing security on your back-end systems, or you will need to build it into all your client
applications and risk inconsistent policies. Fortunately, most
companies are moving toward multi-tier applications, and here
the decision is easy: you should rely on COM+ and enforce
detailed security in the middle tier. This enables you to
utilize the full scalability that the COM+ security services
provide. As a rule of thumb, you should set out to enforce
security in the middle tier and move it to the data store only
if absolutely necessary, and only after fully weighing the
performance implications of doing so. In nearly every case,
you will find that you don't need to move security to the data
store, and when you do there are usually larger architectural
issues (such as client/server versus multi-tier) that you
should address first.
Other Resources
As I mentioned at the beginning of the article, the best way to learn about developing secure applications is to read Michael Howard and David LeBlanc's new book Writing
Secure Code. You might also want to check out Microsoft Press® resources that provide in-depth documentation on issues related to developing secure applications.
For a complete list
of all the developer books, see the Developer Tools
section. You will want to check Microsoft's security page
regularly to get the most up-to-date security information. For
the latest security information targeted directly at
developers, see the security section on MSDN®.
Last Updated: Tuesday, October 30, 2001