
Posts Tagged ‘Security’

Heartbleed – A High-level Look

Saturday, April 12th, 2014

There has been a lot of information flying about on the Internet concerning the Heartbleed vulnerability in the OpenSSL library. Among system administrators and software developers there is a good understanding of exactly what happened, the potential data losses and proper mitigation processes. However, I’ve seen some inaccurate descriptions and discussion in less technical settings.

I thought I would attempt to explain the Heartbleed issue at a high level without focusing on the implementation details. My goal is to help IT and business leaders understand a little bit about how the vulnerability is exploited, why it puts sensitive information at risk and how this relates to their own software development shops.

Heartbleed is a good case study for developers who don’t always worry about data security, feeling that attacks are hard and vulnerabilities are rare. This should serve as a wake-up call that programs need to be tested in two ways – for use cases and misuse cases. We often focus on use cases, “does the program do what we want it to do?” Less frequently do we test for misuse cases, “does the program do things we don’t want it to do?” We need to do more of the latter.
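
To make the distinction concrete, here is a minimal misuse-case sketch in Java (JUnit 4). The echo method is a toy, only loosely modeled on the heartbeat exchange, and the class and method names are purely illustrative, but it shows the shape of a test that asks the second question.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

import java.util.Arrays;

// A toy echo service used only to illustrate use-case versus misuse-case testing.
public class EchoServiceTest {

    static byte[] echo(byte[] payload, int declaredLength) {
        // A safe implementation validates the client-supplied length before using it.
        if (declaredLength < 0 || declaredLength > payload.length) {
            throw new IllegalArgumentException("declared length exceeds payload");
        }
        return Arrays.copyOf(payload, declaredLength);
    }

    // Use case: the declared length matches the payload and the echo succeeds.
    @Test
    public void echoesDeclaredLength() {
        assertEquals(4, echo("bird".getBytes(), 4).length);
    }

    // Misuse case: the client overstates the length (the essence of Heartbleed).
    // The program must refuse rather than read past the end of the payload.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsOverstatedLength() {
        echo("bird".getBytes(), 64000);
    }
}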

I’ve created a 10 minute video to walk through Heartbleed. It includes the parable of a “trusting change machine.” The parable is meant to explain the Heartbleed mechanics without requiring that the viewer be an expert in programming or data encryption.

If you have thoughts about ways to clarify concepts like Heartbleed to a wider audience, please feel free to comment. Data security requires cooperation throughout an organization. Effective and accurate communication is vital to achieving that cooperation.

Here are the links mentioned in the video:

Fuzzing – A Powerful Technique for Software Security Testing

Friday, January 21st, 2011

I was participating in a code review today and was reminded by a senior architect, who started working as an intern for me years ago, of a testing technique I had used with one of his first programs.  He had been assigned to create a basic web application that collected some data from a user and wrote it to a database.  He came into my office, announced it was done and proudly showed it to me.  I walked over to the keyboard, entered a bunch of junk and got a segmentation fault in response.

Although I didn’t have a name for it, that was a standard technique I used when evaluating applications.  After all, the tried and true paths, expected inputs and easy errors will be tested early and often as the developer exercises the application using the basic use cases.  As Boris Beizer said, “The high-probability paths are always tested if only to demonstrate that the system works properly.” (Beizer, Boris. Software Testing Techniques. Boston, MA: Thomson Computer Press, 1990: 76.)

It is unexpected input that is useful when looking to find untested paths through the code. If someone shows me an application for evaluation, the last thing I need to worry about is using it in an expected fashion; everyone else will do that.  In fact, I default to entering data outside the specification when looking at a new application.  I don’t know that my team always appreciates the approach.  They’d probably like to see the application work at least once while I’m in the room.

These days there is a formal name for testing of this type: fuzzing.  A few years ago I preferred calling it “gorilla testing” since I liked the mental picture of beating on the application. (Remember the American Tourister luggage ad in the 1970s?)  But alas, it appears that fuzzing has become the accepted term.

Fuzzing involves passing input that breaks the expected input “rules”.  Those rules could come from some formal requirements, such as an RFC, or informal requirements, such as the set of parameters accepted by an application.  Fuzzing tools can use formal standards, extracted patterns and even randomly generated inputs to test an application’s resilience against unexpected or illegal input.
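
As a rough sketch of the random-input end of that spectrum, the Java snippet below throws byte-level noise at a stand-in parser and reports anything that escapes as an unexpected exception. The parseRecord routine is hypothetical; in practice it would be whatever input-handling code is under test.

import java.util.Random;

// A minimal random fuzzer: generate junk inputs and watch for unexpected failures.
public class SimpleFuzzer {

    public static void main(String[] args) {
        Random random = new Random(42);          // fixed seed so failures can be reproduced
        for (int run = 0; run < 10000; run++) {
            byte[] junk = new byte[random.nextInt(1024)];
            random.nextBytes(junk);
            try {
                parseRecord(junk);               // expected outcome: reject bad input gracefully
            } catch (IllegalArgumentException expected) {
                // A controlled rejection is what we want for malformed input.
            } catch (RuntimeException crash) {
                // Anything else is the fuzzer's payoff: an untested path through the code.
                System.out.println("Run " + run + " triggered " + crash);
            }
        }
    }

    // Placeholder parser standing in for the routine under test.
    static void parseRecord(byte[] input) {
        String text = new String(input);
        if (!text.contains("=")) {
            throw new IllegalArgumentException("missing delimiter");
        }
        // ... real parsing would continue here ...
    }
}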

(more…)

Tag, You’re It!

Wednesday, January 12th, 2011

The Internet is full of examples of simplifications creating vulnerabilities.  A good number of these can be represented as indirection enablers.  IP addresses, domain names, URIs, tiny URLs, QR Codes and now Microsoft tags.  Each of these serves the purpose of simplifying and decoupling.  We have seen many exploits for the first four, what about these last two?

As you likely know, QR Codes and Microsoft tags are graphical images targeted at print media, though there is no reason they can’t be used in an online fashion.  They are most often presented as rectangular graphics (examples below).  The reason for using them is to provide an easy way for someone to access a web page (or other online resource) related to the printed content.  Since these images represent character data they can also be used to house information, like contact details, that do not require online access to interpret.

The use case is simple: install a special program that interprets the codes or tags; point the camera from a smart phone or computer at the graphic; and voilà, your phone presents a web page, phone number or other embedded content. Basically this avoids having to manually enter a URL.  Depending on a company’s marketing strategy this is a powerful feature, since a particular ad might want to direct a person to a URL that embeds information about the specific advertisement, media source, publication page and so forth.  Typing in a complicated URL would put off many people, but this removes most of the effort while making the print media interactive.
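
To show how little magic is involved, here is a small sketch using the open-source ZXing library (my choice for illustration; any barcode toolkit would do) to encode a hypothetical campaign URL into a QR Code matrix. A reader application simply performs the reverse step when the camera sees the printed image.

import com.google.zxing.BarcodeFormat;
import com.google.zxing.WriterException;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;

public class QrLinkDemo {
    public static void main(String[] args) throws WriterException {
        // Hypothetical URL carrying the campaign details an advertiser wants to track.
        String url = "http://example.com/ad?source=print&publication=trade-mag&page=12";

        // The QR Code is nothing more than this character data rendered as a grid of modules.
        BitMatrix matrix = new QRCodeWriter().encode(url, BarcodeFormat.QR_CODE, 33, 33);

        // Dump the matrix as text; a phone's reader app decodes the same pattern back to the URL.
        for (int y = 0; y < matrix.getHeight(); y++) {
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < matrix.getWidth(); x++) {
                row.append(matrix.get(x, y) ? "##" : "  ");
            }
            System.out.println(row);
        }
    }
}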

The main issues with adoption are educating the public about the use of these codes and getting people to install the reader software.  Some of you may recall Radio Shack trying to do something similar several years ago.  They created a scanning device, given out for free, that people had to connect to their PCs.  They could then scan a specific item in a Radio Shack catalog or advertisement and be brought to a web page with detailed information and ordering instructions.

Although that particular attempt failed, these newer approaches have the advantages of being broadly available, leveraging a common accessory on a smart phone (camera) and providing benefits to more than one company.  It will be interesting to see if any of the competing standards catch on with the general public (beyond the two mentioned already there are others such as Data Matrix, Quickmark and PDF417).

My concern, however, isn’t whether these graphical links become popular; it is whether they present another security risk. I believe that they do, in a manner similar to Tiny URLs, yet possibly more insidious.

(more…)

2010 National Cybersecurity Awareness Month

Monday, October 4th, 2010

Welcome, October. There is a chill in the air here in the Northeast and visions of goblins, witches and ghosts are beginning to appear in front yards and on rooftops around the area. Although most of us associate these ideas with the paranormal, those same visions and chill serve to remind us to be ever vigilant when it comes to computer-based threats. So what better time of year to turn our attention to on-line phantoms such as viruses, worms and trojans?

NCSAM Banner

The National Cyber Security Alliance chose October as National Cybersecurity Awareness Month (NCSAM). Their website contains a lot of useful materials for businesses, educators and parents. This is a great resource to use as the basis for informing your company, family and self about on-line risks and effective practices for protecting yourself and others from on-line threats.

A core tenet of the alliance’s message is to “Get Involved” and my company, Blue Slate Solutions, is doing just that. Why should we get involved? Like many aspects of life, we are either part of the solution or part of the problem. Users of the Internet who do not understand on-line risks and fail to proactively protect themselves from being victims of cyberattacks become part of the problem.

(more…)

SQL Injection – Why Does Our Profession Continue to Build Applications that Support It?

Monday, August 23rd, 2010

SQL Injection is commonly given as a root cause when news sites report about stolen data. Here are a few recent headlines for articles describing data loss related to SQL injection: Hackers steal customer data by accessing supermarket database [1], Hacker swipes details of 4m Pirate Bay users [2], and Mass Web Attack Hits Wall Street Journal, Jerusalem Post [3]. I understand that SQL injection is prevalent; I just don’t understand why developers continue to write code that offers this avenue to attackers.

From my point of view SQL injection is very well understood and has been for many years. There is no excuse for a programmer to create code that allows for such an attack to succeed. For me this issue falls squarely on the shoulders of people writing applications. If you do not understand the mechanics of SQL injection and don’t know how to effectively prevent it then you shouldn’t be writing software.

The mechanics of SQL injection are very simple. If input from outside an application is incorporated into a SQL statement as literal text, a potential SQL injection vulnerability is created. Specifically, if a parameter value is retrieved from user input and appended into a SQL statement which is then passed on to the RDBMS, the parameter’s value can be set by an attacker to alter the meaning of the original SQL statement.

Note that this attack is not difficult to engineer, complicated to execute or a risk only with web-based applications. There are tools to quickly locate and attack vulnerable applications. Also note that using encrypted channels (e.g. HTTPS) does nothing to prevent this attack. The issue is not related to encrypting the data in transit; rather, it is about keeping the untrusted data away from the backend RDBMS’ interpretation environment.

Here is a simple example of how SQL injection works. Assume we have an application that accepts a last name which will be used to search a database for contact information. The program takes the input, stores it in a variable called lastName, and creates a query:

String sql = "select * from contact_info where lname = '" + lastName + "'";

Now, if an attacker tries the input of: ' or 1=1 or '2'='

It will create a SQL statement of:

select * from contact_info where lname = '' or 1=1 or '2'=''

This is a legal SQL statement and will retrieve all the rows from the contact_info table. This might expose a lot of data or possibly crash the environment (a denial of service attack). In any case, using other SQL keywords, particularly UNION, the attacker can now explore the database, including other tables and schemas.
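
The widely accepted defense is to keep user input out of the SQL text altogether by binding it as a parameter. Here is a minimal sketch using JDBC’s PreparedStatement against the same contact_info example (the connection setup is assumed):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ContactLookup {
    // The last name is bound as a parameter rather than concatenated into the SQL text,
    // so input such as ' or 1=1 or '2'=' is matched literally instead of being parsed as SQL.
    static void printContacts(Connection conn, String lastName) throws SQLException {
        String sql = "select * from contact_info where lname = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, lastName);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("lname"));
                }
            }
        }
    }
}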

(more…)

Destination Reached: CISSP

Friday, July 2nd, 2010

I am happy to report that I have been awarded the Certified Information Systems Security Professional (CISSP) designation by the International Information Systems Security Certification Consortium [(ISC)²].

I started pursuing the certification in mid-2009, got serious about studying early this year (2010), took the exam in late April, was notified that I passed and had my background endorsed in May, had to update my resume for an auditor in early June and was awarded the CISSP designation at the end of June.

I felt that this certification was important both professionally and personally.

Professionally, the certification serves as a validation that I have a solid and broad understanding of information systems’ security.  People who have worked with me know that I have been focused on IS security for many years.

Whether performing security-centered code reviews, fixing flawed implementations or teaching designers and developers how to improve the security of their systems, I have been on a mission to mentor and train people to observe effective security practices and principles.  I’ve also had operational responsibility for system infrastructures.  With that experience I was able to pass GIAC’s GSEC and Red Hat’s RHCE exams several years ago.

Personally, the process of studying and passing the exam allowed me to pursue and attain a non-trivial goal.  I am enrolled and taking classes toward my master’s degree, but completing that work will require several more years of part time attendance.  Setting and achieving intermediate goals helps to keep me focused and learning.

If you are wondering what the CISSP is all about, please read on.

(more…)

Full Disk Encryption – A (Close to Home) Case Study

Wednesday, April 28th, 2010

This is a follow-up to my previous entry regarding full disk encryption (see: http://monead.com/blog/?p=319).  In this entry I’ll look at Blue Slate’s experience with rolling out full disk encryption company-wide.

Blue Slate began experimenting with full disk encryption in 2008.  I was actually the first user at our company to have a completely encrypted disk.  My biggest surprise was the lack of noticeable impact on system performance.  My machine (Gateway M680) was running Windows XP and I had 2GB of RAM and a similarly-sized swap space.  Beyond a lot of programming work I do video and audio editing.  I did not notice significant impact on editing and rendering of such projects.

Later in 2008, we launched a proof of concept (POC) project involving team members from across the company (technical and non-technical users).  This test group utilized laptops with fully encrypted drives for several months.  We wanted to ensure that we would not have problems with the various software packages that we use. During this time we went through XP service pack releases, major software version upgrades and even a switch of our antivirus solution.  We had no reports of encryption-related issues from any of the participants.

By 2009 we were focused on leveraging full disk encryption on every non-server computer in the company.  It took some time due to two constraints.

First, we needed to roll out a company-wide backup solution (as mentioned in my previous post on full disk encryption, recovery of files from a corrupted encrypted device is nearly impossible).  Second, we needed to work through a variety of scheduling conflicts (we needed physical access to each machine to set up the encryption product) across our decentralized workforce.

(more…)

Full Disk Encryption – Two Out of Three Aren’t Bad

Wednesday, April 14th, 2010

Security is a core interest of mine.  I have written and taught about security for many years, consistently keeping our team focused on secure solutions, and am pursuing the CISSP certification.  Some aspects of security are hard to make work effectively and other aspects are fairly simple, having more to do with common sense than technical expertise.

In this latter category I would put full disk encryption.  Clearly there are still many companies and individuals who have not embraced this technique.  The barrage of news articles describing lost and stolen computers containing sensitive information on unencrypted hard drives makes this point every day.

This leads me to the question of why people don’t use this technology.  Is it a lack of information, limitations in the available products or something else?  For my part I’ll focus this posting on providing information regarding full disk encryption, based on experience. A future post will describe Blue Slate’s deployment of full disk encryption.

Security focuses on three major concepts: Confidentiality, Integrity and Availability (CIA).  These terms apply across the spectrum of potential security-related issues.  Whether considering the physical environment, hardware, applications or data, there are techniques to protect the CIA within that domain.

(more…)

Privacy Lost – Unmasking Masked Data

Thursday, April 1st, 2010

Privacy is an issue which is consistently in the news.  Large amounts of data are stored by retailers, governments, health care providers, employers and so forth. Much of this data contains personal information.  Keeping that data private has proven itself to be a difficult task.

We have seen numerous examples of unintended data loss (unintended by the company whose systems are stolen or attacked).

We hear about thefts of laptops containing personal information for hundreds of thousands of people.  Internet-based attacks that allow attackers access to financial transaction data and even rogue credit card swiping equipment hidden in gas pumps have become background noise in a sea of leaked data.  This is an area that gets the lion’s share of attention in the media and by security professionals.

Worse than these types of personal data loss, because they are completely preventable, are those that are predicated on a company consciously releasing its customer data.  Such companies always assume that they are not introducing risk, but often they are.  In all cases, if the owner of the data had simply held it internally, no privacy loss would have occurred.

There have been cases of personal data loss due to mistakes in judgment.

AOL released a large collection of search data to researchers.  The people releasing the data didn’t consider this a risk to privacy.  How could the search terms entered by anonymous people present a risk to privacy?

Of course we now know that within the data were people’s social security numbers (SSN), phone numbers, credit card numbers and so forth.  Why?  Well, it turns out that some people will search for those things, quite possibly to prove to themselves that their data is safe.  What better way to see if your SSN or credit card number is published on the Internet than by typing it into a search engine?  No matches, great!

Personal data has even been lost by companies releasing data after attempting to mask or anonymize it.

The intent of masking is to remove enough information, the personally identifying information (PII), so that the data cannot be associated with real people. Of course this has to be done without losing the important details that allow patterns and relationships in the data to be found.
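
As a toy illustration of what naive masking typically looks like (the field names and values here are entirely hypothetical), the sketch below strips the direct identifiers but keeps the analytically useful attributes. Notice that quasi-identifiers such as ZIP code, birth date and gender survive, and it is exactly such combinations that have allowed “masked” records to be tied back to real people.

import java.util.LinkedHashMap;
import java.util.Map;

// A toy illustration of naive masking: drop the obvious PII, keep the rest.
public class MaskingDemo {

    static Map<String, String> mask(Map<String, String> record) {
        Map<String, String> masked = new LinkedHashMap<>(record);
        masked.remove("name");   // remove the direct identifiers...
        masked.remove("ssn");
        return masked;           // ...but keep the attributes analysts want to study
    }

    public static void main(String[] args) {
        Map<String, String> original = new LinkedHashMap<>();
        original.put("name", "Jane Doe");
        original.put("ssn", "123-45-6789");
        original.put("zip", "12210");
        original.put("birthDate", "1975-03-14");
        original.put("gender", "F");
        original.put("diagnosis", "asthma");

        // Prints: {zip=12210, birthDate=1975-03-14, gender=F, diagnosis=asthma}
        System.out.println(mask(original));
    }
}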

(more…)

At Last, My Web Applications Will Be Totally Secure!?

Saturday, February 27th, 2010

Yet another vendor attempts to reduce application security to something that can be purchased.

“How to Hacker-Proof Your Web Applications,” was the amusing subject of an email I received recently.  I’m sure that it wasn’t meant to be amusing.  I suppose I just have a strange sense of humor.

The source of the email was a company that I consider to be reputable, though this could lead me to reconsider that opinion.  I won’t single out the organization since hyperbole apparently continues to be a requirement to sell most anything.

I have to wonder, though: does anyone actually read a subject line like that and then open the email fully expecting to be presented with a product or service that does what the subject states?  I certainly hope not.  Let’s explore the meaning of the message and then we’ll see if the email content led me to such a nirvana.

“Your Web Applications” covers every piece of software I have that presents a web interface.  This includes my traditional HTTP/HTML-based applications as well as web services.  These applications may be based on a variety of technologies such as .NET, Java, PERL and Ruby.  They include third-party libraries and frameworks.  Further, they are hosted on some form of hardware running some operating system.  Clearly this claim applies to a wide and deep world of application infrastructures and architectures.

“Hacker-Proof” means that no attacker will be able to successfully exploit the applications.  That is quite a promise.  By opening this email I’m going to find out what is necessary to prevent all successful exploits for my entire set of web-facing applications?  This is great news!

(more…)