
Archive for the ‘Software Security’ Category

Heartbleed – A High-level Look

Saturday, April 12th, 2014

There has been a lot of information flying about on the Internet concerning the Heartbleed vulnerability in the OpenSSL library. Among system administrators and software developers there is a good understanding of exactly what happened, the potential data losses and proper mitigation processes. However, I’ve seen some inaccurate descriptions and discussion in less technical settings.

I thought I would attempt to explain the Heartbleed issue at a high level without focusing on the implementation details. My goal is to help IT and business leaders understand a little bit about how the vulnerability is exploited, why it puts sensitive information at risk and how this relates to their own software development shops.

Heartbleed is a good case study for developers who don’t always worry about data security, feeling that attacks are hard and vulnerabilities are rare. It should serve as a wake-up call that programs need to be tested in two ways: for use cases and for misuse cases. We often focus on use cases, asking “does the program do what we want it to do?” Less frequently do we test for misuse cases, asking “does the program do things we don’t want it to do?” We need to do more of the latter.
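
To make the distinction concrete, here is a hypothetical pair of tests; the parser and all of its names are my own invention for illustration, not code from any project mentioned here. The first test exercises a use case, the second a misuse case that asserts junk input produces a controlled failure rather than a crash.

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    // Hypothetical input parser, used only to illustrate the two kinds of tests.
    class AgeField {
        static int parse(String input) {
            int age = Integer.parseInt(input.trim());   // junk input fails here, loudly
            if (age < 0 || age > 150) {
                throw new IllegalArgumentException("age out of range: " + age);
            }
            return age;
        }
    }

    class AgeFieldTest {
        @Test
        void useCase_acceptsValidInput() {
            // "Does the program do what we want it to do?"
            assertEquals(42, AgeField.parse("42"));
        }

        @Test
        void misuseCase_rejectsJunkWithControlledError() {
            // "Does the program do things we don't want it to do?"
            // Garbage must yield a well-defined error, never a crash or silent misuse.
            assertThrows(NumberFormatException.class, () -> AgeField.parse("*&^%$#@!"));
            assertThrows(IllegalArgumentException.class, () -> AgeField.parse("9999"));
        }
    }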

I’ve created a 10-minute video to walk through Heartbleed. It includes the parable of a “trusting change machine.” The parable is meant to explain the Heartbleed mechanics without requiring that the viewer be an expert in programming or data encryption.

If you have thoughts about ways to clarify concepts like Heartbleed to a wider audience, please feel free to comment. Data security requires cooperation throughout an organization. Effective and accurate communication is vital to achieving that cooperation.

Here are the links mentioned in the video:

Fuzzing – A Powerful Technique for Software Security Testing

Friday, January 21st, 2011

I was participating in a code review today and was reminded by a senior architect, who started working as an intern for me years ago, of a testing technique I had used with one of his first programs.  He had been assigned to create a basic web application that collected some data from a user and wrote it to a database.  He came into my office, announced it was done and proudly showed it to me.  I walked over to the keyboard, entered a bunch of junk and got a segmentation fault in response.

Although I didn’t have a name for it, that was a standard technique I used when evaluating applications.  After all, the tried and true paths, expected inputs and easy errors will be tested early and often as the developer exercises the application using the basic use cases.  As Boris Beizer said, “The high-probability paths are always tested if only to demonstrate that the system works properly.” (Beizer, Boris. Software Testing Techniques. Boston, MA: Thomson Computer Press, 1990: 76.)

It is unexpected input that is most useful for finding untested paths through the code. If someone shows me an application for evaluation, the last thing I need to worry about is using it in an expected fashion; everyone else will do that. In fact, I default to entering data outside the specification when looking at a new application. I don’t know that my team always appreciates the approach. They’d probably like to see the application work at least once while I’m in the room.

These days there is a formal name for this type of testing: fuzzing. A few years ago I preferred calling it “gorilla testing,” since I liked the mental picture of beating on the application. (Remember the American Tourister luggage ads of the 1970s?) But alas, it appears that fuzzing has become the accepted term.

Fuzzing involves passing input that breaks the expected input “rules.” Those rules could come from formal requirements, such as an RFC, or informal requirements, such as the set of parameters accepted by an application. Fuzzing tools can use formal standards, extracted patterns and even randomly generated inputs to test an application’s resilience against unexpected or illegal input.
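
As a minimal sketch of the random-input end of that spectrum (Integer.parseInt stands in for the real code under test; a production fuzzer would be far more sophisticated about generating and mutating inputs), a naive fuzzer just generates arbitrary strings and flags any failure the specification does not allow:

    import java.util.Random;

    // A minimal random-input fuzzer: throws generated strings at an entry point
    // and reports any input that escapes as an unexpected exception.
    public class NaiveFuzzer {
        private static final Random RNG = new Random();

        static String randomInput(int maxLen) {
            int len = RNG.nextInt(maxLen + 1);
            StringBuilder sb = new StringBuilder(len);
            for (int i = 0; i < len; i++) {
                sb.append((char) RNG.nextInt(0x100)); // arbitrary characters, valid or not
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            for (int i = 0; i < 100_000; i++) {
                String input = randomInput(64);
                try {
                    Integer.parseInt(input); // stand-in for the code under test
                } catch (NumberFormatException expected) {
                    // Rejection is the documented behavior; this path is fine.
                } catch (RuntimeException unexpected) {
                    System.err.printf("fuzz case %d triggered %s for input %s%n",
                            i, unexpected, input);
                }
            }
        }
    }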

(more…)

SQL Injection – Why Does Our Profession Continue to Build Applications that Support It?

Monday, August 23rd, 2010

SQL injection is commonly given as a root cause when news sites report about stolen data. Here are a few recent headlines for articles describing data loss related to SQL injection: “Hackers steal customer data by accessing supermarket database” [1], “Hacker swipes details of 4m Pirate Bay users” [2], and “Mass Web Attack Hits Wall Street Journal, Jerusalem Post” [3]. I understand that SQL injection is prevalent; I just don’t understand why developers continue to write code that offers this avenue to attackers.

From my point of view, SQL injection is very well understood and has been for many years. There is no excuse for a programmer to create code that allows such an attack to succeed. For me this issue falls squarely on the shoulders of the people writing applications. If you do not understand the mechanics of SQL injection and don’t know how to effectively prevent it, then you shouldn’t be writing software.

The mechanics of SQL injection are very simple. If input from outside an application is incorporated into a SQL statement as literal text, a potential SQL injection vulnerability is created. Specifically, if a parameter value is retrieved from user input and appended into a SQL statement which is then passed on to the RDBMS, the parameter’s value can be set by an attacker to alter the meaning of the original SQL statement.

Note that this attack is not difficult to engineer, complicated to execute, or a risk only for web-based applications. There are tools to quickly locate and attack vulnerable applications. Also note that using encrypted channels (e.g. HTTPS) does nothing to prevent this attack. The issue is not about encrypting the data in transit; rather, it is about keeping the untrusted data away from the backend RDBMS’s interpretation environment.

Here is a simple example of how SQL injection works. Assume we have an application that accepts a last name which will be used to search a database for contact information. The program takes the input, stores it in a variable called lastName, and creates a query:

String sql = "select * from contact_info where lname = '" + lastName + "'";

Now, if an attacker tries the input: ' or 1=1 or '2'='

It will create a SQL statement of:

select * from contact_info where lname = '' or 1=1 or '2'=''

This is a legal SQL statement and will retrieve all the rows from the contact_info table. This might expose a lot of data or possibly crash the environment (a denial of service attack). In any case, using other SQL keywords, particularly UNION, the attacker can now explore the database, including other tables and schemas.
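
For completeness, here is the standard prevention, parameterized queries, sketched with JDBC. The query reuses the contact_info example above; the connection setup is assumed. Because the user’s value is bound as a parameter, it reaches the RDBMS as data rather than as SQL text:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ContactSearch {
        // lastName travels to the RDBMS as a bound value, never as SQL text,
        // so the payload ' or 1=1 or '2'=' simply matches no rows.
        static void findByLastName(Connection conn, String lastName) throws SQLException {
            String sql = "select * from contact_info where lname = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, lastName);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("lname"));
                    }
                }
            }
        }
    }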

(more…)

Destination Reached: CISSP

Friday, July 2nd, 2010

I am happy to report that I have been awarded the Certified Information Systems Security Professional (CISSP) credential by the International Information Systems Security Certification Consortium, (ISC)².

I started pursuing the certification in mid-2009, got serious about studying early this year (2010), took the exam in late April, was notified that I passed and had my background endorsed in May, had to update my resume for an auditor in early June and was awarded the CISSP designation at the end of June.

I felt that this certification was important both professionally and personally.

Professionally, the certification serves as validation that I have a solid and broad understanding of information systems security.  People who have worked with me know that I have been focused on IS security for many years.

Whether performing security-centered code reviews, fixing flawed implementations or teaching designers and developers how to improve the security of their systems, I have been on a mission to mentor and train people to observe effective security practices and principles.  I’ve also had operational responsibility for system infrastructures.  With that experience I was able to pass GIAC’s GSEC and Red Hat’s RHCE exams several years ago.

Personally, the process of studying and passing the exam allowed me to pursue and attain a non-trivial goal.  I am enrolled and taking classes toward my master’s degree, but completing that work will require several more years of part time attendance.  Setting and achieving intermediate goals helps to keep me focused and learning.

If you are wondering what the CISSP is all about, please read on.

(more…)

At Last, My Web Applications Will Be Totally Secure!?

Saturday, February 27th, 2010

Yet another vendor attempts to reduce application security to something that can be purchased.

“How to Hacker-Proof Your Web Applications” was the amusing subject of an email I received recently.  I’m sure that it wasn’t meant to be amusing.  I suppose I just have a strange sense of humor.

The source of the email was a company that I consider to be reputable, though this could lead me to reconsider that opinion.  I won’t single out the organization since hyperbole apparently continues to be a requirement to sell most anything.

I have to wonder though, does anyone actually read a subject line like that and then open the email fully expecting to be presented with a product or service that does what the subject states?  I certainly hope not.  Let’s explore the meaning of the message and then we’ll see if the email content led me to such a nirvana.

“Your Web Applications” covers every piece of software I have that presents a web interface.  This includes my traditional HTTP/HTML-based applications as well as web services.  These applications may be based on a variety of technologies such as .NET, Java, Perl and Ruby.  They include third-party libraries and frameworks.  Further, they are hosted on some form of hardware running some operating system.  Clearly this claim applies to a wide and deep world of application infrastructures and architectures.

“Hacker-Proof” means that no attacker will be able to successfully exploit the applications.  That is quite a promise.  By opening this email I’m going to find out what is necessary to prevent all successful exploits across my entire set of web-facing applications?  This is great news!

(more…)

Man in the Middle? No, Just Your Carrier

Saturday, January 23rd, 2010

As you may be aware, several individuals using AT&T-based cellular phones recently reported being logged into the wrong Facebook account when accessing the Facebook site from their phones.  Current reports indicate that the root cause is AT&T’s network, which misdirected Facebook cookies.  These cookies, set to reflect that an individual has logged in, are to be stored on each user’s device.  Is this issue a cause for concern?  Is the issue likely limited to Facebook?  Does Facebook bear any responsibility?

In terms of concern, I’d say there is cause for major concern. We implicitly trust that a single request/response interaction between the browser and the server is carried over a single network connection.  Unless an attacker inserts himself or herself into the virtual connection circuit, the server’s response to the browser (containing the cookie) should travel back over the same connection that sent the original credentials.

In this case that trust appears to be misplaced.  It is easy to understand how this is possible.  The carriers are free to manage connections however they choose.  In reality the carrier is likely proxying between the cellular network and the Internet, like any NAT-based approach.  A little coding error, such as an improperly shared resource, and results destined for one phone are returned to another.

Classically this seems like a race condition.  Certainly in the latest incident that explanation seems consistent with the facts, since the two people who ended up in each other’s Facebook accounts were online at the same time.  There is nothing particularly interesting about multi-threaded code containing a race condition.  It has happened before and will happen again.
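
To illustrate what an “improperly shared resource” might look like, here is a hypothetical fragment; this is my invention for explanatory purposes, not AT&T’s actual code. The proxy keeps per-response state in a field shared by all worker threads:

    // Hypothetical proxy fragment, for illustration only: the cookie for the
    // in-flight response is kept in instance state shared by all worker threads.
    public class BrokenCookieProxy {
        private String pendingCookie; // shared mutable state: the bug

        public void forwardResponse(String cookieFromServer) {
            pendingCookie = cookieFromServer;   // thread A writes its user's cookie
            // ... if thread B runs here, it overwrites pendingCookie ...
            deliverToPhone(pendingCookie);      // thread A may deliver B's cookie
        }

        private void deliverToPhone(String cookie) {
            /* send the response, with the cookie, to the handset */
        }
    }

Making the cookie a local variable, or otherwise confining each response to the thread handling it, removes the race.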

This leads me to my second question: is this likely limited to Facebook users on the AT&T network? That seems doubtful.  It is hard to imagine that the carrier’s proxying infrastructure includes specialized instructions just for Facebook.  It seems very likely that any connection-related flaw can occur for any web interaction.

(more…)

Applying “Security in Depth” Requires Documentation and Cooperation

Monday, August 11th, 2008

When applied to software, the principle of Security in Depth helps us mitigate the fact that we will always produce flawed applications. In this case the flaws of which I am writing relate to the security of the application. Although we should always strive to create secure code, the fact is that exploitable vulnerabilities will eventually be discovered in our programs.

Certainly the odds of an exploit being discovered are based on the complexity of the application, the value of the data accessible through the exploited application and the quality of the security-related focus given to the application during the SDLC. Note that the value of the data accessible through the exploited application is not necessarily limited to what the application is supposed to access, but rather what is exposed by the exploit.

Since we know that we cannot produce “perfect” code, we need to plan to minimize the impact of a successful exploit. In other words, we strive to ensure that a successful exploit exposes no more data than an authenticated and authorized user could reach through the application anyway. Security in Depth is a powerful approach to meeting this objective.

Security in Depth requires that each tier in our application enforce the same restrictions on information flow as the other tiers. For example, if our front-end is restricting input to a maximum length and escaping HTML and JavaScript strings, then the object and the persistence layers should do likewise. If a user is restricted to accessing certain data as part of the application’s operation, the database management software should enforce the same limitations.
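
Here is a small sketch of that principle; the class and rule are hypothetical. The front-end tier and the persistence tier each enforce the same length limit, so bypassing one tier gains an attacker nothing:

    // Hypothetical two-tier check, for illustration: both tiers enforce the
    // same restriction instead of assuming the other tier already did.
    public class CommentService {
        static final int MAX_COMMENT_LENGTH = 500;

        // Front-end tier: validates before the value enters the system.
        public String acceptFromForm(String raw) {
            if (raw == null || raw.length() > MAX_COMMENT_LENGTH) {
                throw new IllegalArgumentException("comment rejected at front end");
            }
            return raw;
        }

        // Persistence tier: enforces the identical rule again; a caller that
        // bypassed the front end gains nothing.
        public void save(String comment) {
            if (comment == null || comment.length() > MAX_COMMENT_LENGTH) {
                throw new IllegalStateException("comment rejected at persistence tier");
            }
            // ... write to the database, where a column length or CHECK
            // constraint repeats the rule a third time ...
        }
    }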

(more…)

Making the Composition of Secure Software Second Nature

Wednesday, August 6th, 2008

For all the publicity surrounding software vulnerabilities and successful exploits, we as an industry don’t seem to be rigorously incorporating effective approaches for making software more secure. Many resources are available to educate people involved in software creation regarding security. Further, tools and libraries exist which simplify the redundant aspects of secure software composition.

There are also well-known concepts that have been applied to physical and network security for many years which are equally effective when designing and implementing software. Concepts such as Security in Depth, Least Privilege, Segregation of Duties and Audit Trails serve us well when applied to programming. Tools such as the OWASP and Apache libraries simplify the process of sanitizing and normalizing data received from users and external systems. Automated inspection tools, static and dynamic, help us to identify and remove vulnerabilities in our implementations.
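
As a small example of the kind of drudgery these libraries remove, the OWASP Java Encoder reduces HTML output escaping to a single call. This is a sketch only; the class name and rendering context are mine:

    import org.owasp.encoder.Encode;

    public class GreetingPage {
        // Escaping on output renders a payload such as <script>alert(1)</script>
        // as inert text in the browser instead of executable markup.
        static String renderGreeting(String userSuppliedName) {
            return "<p>Hello, " + Encode.forHtml(userSuppliedName) + "</p>";
        }

        public static void main(String[] args) {
            System.out.println(renderGreeting("<script>alert(1)</script>"));
        }
    }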

Those who architect, design, implement and test software must understand the typical risks created by different aspects of an application’s architecture in order to leverage appropriate techniques to reduce the likelihood of a vulnerability being released. Further, we should assume vulnerabilities will be found and exploited. Our designs must include features to limit the extent of damage such an exploit would create.

In this category’s posts I will concentrate on the classifications of vulnerabilities and the effective techniques we should apply when creating software. Hopefully this will help raise awareness of the security-centric due diligence expected from those of us who participate in the process of authoring software.