
Archive for the ‘Security’ Category

At Last, My Web Applications Will Be Totally Secure!?

Saturday, February 27th, 2010

Yet another vendor attempts to reduce application security to something that can be purchased.

“How to Hacker-Proof Your Web Applications,” was the amusing subject of an email I received recently.  I’m sure that it wasn’t meant to be amusing.  I suppose I just have a strange sense of humor.

The source of the email was a company that I consider to be reputable, though this could lead me to reconsider that opinion.  I won’t single out the organization since hyperbole apparently continues to be a requirement to sell most anything.

I have to wonder though, does anyone actually read a subject line like that and then open the email fully expecting to be presented with a product or service that does what the subject states?  I certainly hope not.  Let’s explore the meaning of the message and then we’ll see if the email content led me to such a nirvana.

“Your Web Applications” covers every piece of software I have that presents a web interface.  This includes my traditional HTTP/HTML-based applications as well as web services.  These applications may be based on a variety of technologies such as .NET, Java, Perl and Ruby.  They include third-party libraries and frameworks.  Further, they are hosted on some form of hardware running some operating system.  Clearly this claim applies to a wide and deep world of application infrastructures and architectures.

“Hacker-Proof” means that no attacker will be able to successfully exploit the applications.  That is quite a promise.  By opening this email I’m going to find out what is necessary to prevent all successful exploits across my entire set of web-facing applications?  This is great news!

Man in the Middle? No, Just Your Carrier

Saturday, January 23rd, 2010

As you may be aware, several individuals using AT&T-based cellular phones recently reported being logged into the wrong Facebook account when accessing the Facebook site from their phones.  Current reports indicate that the root cause is AT&T’s network, which misdirected Facebook cookies.  These cookies, set to reflect that an individual has logged in, are supposed to be stored on each user’s own device.  Is this issue a cause for concern?  Is the issue likely limited to Facebook?  Does Facebook bear any responsibility?

In terms of concern, I’d say there is cause for major concern. We implicitly trust that each request/response interaction between the browser and the server is carried over a single network connection.  Unless an attacker inserts himself or herself into the virtual connection circuit, the server’s response to the browser (containing the cookie) must travel over the same connection that carried the original credentials.

In this case that trust appears to be misplaced.  It is easy to understand how this is possible.  The carriers are free to manage connections however they choose.  In reality the carrier is likely proxying between the cellular network and the Internet, like any NAT-based approach.  A little coding error, such as an improperly shared resource, and results destined for one phone are returned to another.

Classically this seems like a race condition.  Certainly in the latest incident that explanation seems consistent with the facts, since the two people who ended up on each other’s Facebook accounts were on-line at the same time.  There is nothing particularly interesting about multi-threaded code containing a race condition.  It has happened before and will happen again.
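
To make that failure mode concrete, here is a minimal sketch of the kind of flaw described above.  The class and method names are hypothetical (this is not AT&T’s actual code); the point is how a single field shared across request-handling threads can deliver one subscriber’s response, cookies and all, to another subscriber.

// Hypothetical sketch: a proxy worker that stores per-request state in a
// field shared by every thread instead of keeping it local to the request.
public class BrokenProxyWorker {

    // BUG: one field shared by every thread that handles subscriber requests.
    private Subscriber currentSubscriber;

    public void handleBroken(Subscriber subscriber, Request request) {
        currentSubscriber = subscriber;               // thread A stores its caller...
        Response response = forwardToOrigin(request); // ...thread B may overwrite the field here...
        currentSubscriber.send(response);             // ...so A's response (and cookies) go to B's phone.
    }

    public void handleFixed(Subscriber subscriber, Request request) {
        // Fix: keep the association in local state for the life of the request.
        Response response = forwardToOrigin(request);
        subscriber.send(response);
    }

    private Response forwardToOrigin(Request request) {
        return new Response(); // placeholder for the real upstream call
    }

    // Minimal placeholder types so the sketch stands alone.
    static class Subscriber { void send(Response r) { /* deliver to the device */ } }
    static class Request { }
    static class Response { }
}

The fix is nothing exotic: keep the request-to-subscriber association in state scoped to the request, which is exactly the discipline that tends to get lost when code is hurriedly made multi-threaded.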

This leads me to my second question: is this likely limited to Facebook users on the AT&T network?  That seems doubtful.  It is hard to imagine that the carrier’s infrastructure that proxies requests includes specialized instructions just for Facebook.  It seems very likely that any connection-related flaw can occur for any web interaction.


Anticipating the Business Rules Forum – 2008

Thursday, October 23rd, 2008

I am looking forward to this year’s Business Rules Forum (BRF – http://www.businessrulesforum.com/).  This is the 11th year this international gathering has occurred.   It is interesting to listen to the real-world experiences that people are having as they leverage business process gathering and execution environments.

Once again I have been given the opportunity to co-present a session.  This year Mike Strianese and I will be discussing the importance of security when leveraging web services.  The Service Oriented Architecture (SOA) approach has gained a lot of traction in rule engine and workflow environments.  With the accelerated pace of integration and application deployments comes an increased risk of vulnerabilities and exploits.

Beyond the informative sessions, the BRF provides a great showcase to see and experience a broad range of tools and services available for documenting, testing, integrating and deploying rule-based solutions.  This is an efficient way to gain a lot of insight into the operation of the diverse offerings.

I am a strong advocate for leveraging rule and workflow engines to accelerate application development.  In the same way relational databases free us from creating a lot of code to manipulate data in files, these environments simplify certain aspects of enterprise business system development.  It has become rare to encounter a financial or healthcare-related enterprise that does not leverage at least one of these tools.

If you happen to be at the BRF next week I invite you to stop by our session (Security of Services, Thursday at 9:05 am – http://www.businessrulesforum.com/abstracts.php?id=350).  Feel free to introduce yourself afterwards as well! 

Applying “Security in Depth” Requires Documentation and Cooperation

Monday, August 11th, 2008

When applied to software, the principle of Security in Depth helps us mitigate the fact that we will always produce flawed applications. In this case the flaws of which I am writing relate to the security of the application. Although we should always strive to create secure code, the fact is that exploitable vulnerabilities will eventually be discovered in our programs.

Certainly the odds of an exploit being discovered are based on the complexity of the application, the value of the data accessible through the exploited application, and the quality of the security-related focus given to the application during the SDLC. Note that the value of the data accessible through the exploited application is not necessarily limited to what the application is supposed to access, but rather extends to whatever the exploit exposes.

Since we know that we cannot produce “perfect” code, we need to plan for minimizing the impact of a successful exploit. In other words, we strive to make the value of the data accessible through the exploited application no greater than the value of the data the application is intended to access by an authenticated and authorized user. Security in Depth is a powerful approach to meeting this objective.

Security in Depth requires that each tier in our application enforce the same restrictions on information flow as the other tiers. For example, if our front-end is restricting input to a maximum length and escaping HTML and JavaScript strings, then the object and the persistence layers should do likewise. If a user is restricted to accessing certain data as part of the application’s operation, the database management software should enforce the same limitations.
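
As a concrete illustration, the sketch below enforces the same rule (a length limit plus escaping of HTML/JavaScript-significant characters) in both the web tier and the persistence tier.  The class names are hypothetical and not tied to any particular framework; the point is that the lower tier does not assume the upper tier did its job.

// Shared policy: both tiers call this same method rather than trusting each other.
public final class CommentPolicy {

    private static final int MAX_LENGTH = 500;

    public static String sanitize(String input) {
        if (input == null || input.length() > MAX_LENGTH) {
            throw new IllegalArgumentException("comment rejected");
        }
        return input.replace("&", "&amp;")
                    .replace("<", "&lt;")
                    .replace(">", "&gt;")
                    .replace("\"", "&quot;")
                    .replace("'", "&#x27;");
    }
}

// Web tier: validates before the value leaves the controller.
class CommentController {
    String acceptComment(String rawComment) {
        return CommentPolicy.sanitize(rawComment);
    }
}

// Persistence tier: re-applies the rule, and the database account it uses
// should be limited to the tables the application legitimately needs.
class CommentRepository {
    void save(String comment) {
        String safe = CommentPolicy.sanitize(comment);
        // store 'safe' using a parameterized statement here
    }
}

If an attacker finds a path around the controller, the repository and the database permissions still hold the line, which is the whole value of the approach.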


Making the Composition of Secure Software Second Nature

Wednesday, August 6th, 2008

For all the publicity surrounding software vulnerabilities and successful exploits, we as an industry don’t seem to be rigorously incorporating effective approaches for making software more secure. Many resources are available to educate people involved in software creation regarding security. Further, tools and libraries exist which simplify the redundant aspects of secure software composition.

There are also well-known concepts that have been applied to physical and network security for many years which are equally effective when designing and implementing software. Concepts such as Security in Depth, Least Privilege, Segregation of Duties and Audit Trails serve us well when applied to programming. Tools such as the OWASP and Apache libraries simplify the process of sanitizing and normalizing data received from users and external systems. Automated inspection tools, static and dynamic, help us to identify and remove vulnerabilities in our implementations.
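
To show the shape of what those libraries provide, here is a minimal, hand-rolled sketch of whitelist validation combined with Unicode normalization.  The rules and names are illustrative assumptions only; the OWASP and Apache offerings handle far more cases and encodings than this.

import java.text.Normalizer;
import java.util.regex.Pattern;

public final class UsernameValidator {

    // Whitelist: 3 to 32 characters drawn from letters, digits, dot, underscore, hyphen.
    private static final Pattern ALLOWED = Pattern.compile("^[A-Za-z0-9._-]{3,32}$");

    public static String validate(String raw) {
        if (raw == null) {
            throw new IllegalArgumentException("username required");
        }
        // Normalize first so visually equivalent Unicode forms cannot slip past the whitelist.
        String normalized = Normalizer.normalize(raw, Normalizer.Form.NFKC).trim();
        if (!ALLOWED.matcher(normalized).matches()) {
            throw new IllegalArgumentException("username rejected");
        }
        return normalized;
    }

    public static void main(String[] args) {
        System.out.println(validate("alice.smith"));            // accepted
        try {
            validate("<script>alert(1)</script>");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());  // rejected
        }
    }
}

Normalize-then-whitelist is the pattern to internalize; the libraries simply make it harder to forget a case.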

Those who architect, design, implement and test software must understand the typical risks created by different aspects of an application’s architecture in order to leverage appropriate techniques to reduce the likelihood of a vulnerability being released. Further, we should assume vulnerabilities will be found and exploited. Our designs must include features to limit the extent of damage such an exploit would create.

In this category’s posts I will concentrate on the classifications of vulnerabilities and effective techniques that we should apply when creating software. Hopefully this will help to raise awareness surrounding security-centric due diligence that is expected from those of us who participate in the process of authoring software.