
Why It Is Necessary To Get Your Website Scanned For Security

Consider this: in one of the largest web-based data theft cases, 45 million credit cards were compromised. In another case, over 40,000 USD mysteriously disappeared from two bank accounts, thanks to a hacker group. In yet another, when clients and prospective customers of ABC Company tried to log in to the company's website, they were redirected to a competitor's page, and the company lost those customers. Apart from direct losses, there are many indirect ones. According to a WhiteHat Security report from 2008, 9 out of 10 websites are vulnerable to, or already infected by, one or more security threats.
It has become increasingly important to make your website secure, and the first step towards making it secure is to get a thorough website security scan.

Below are the characteristics of a good website scan tool:
• Should be designed to understand the business/website model.
• Should be able to evaluate website security risks.
• Should be updated regularly.
• Should have an easy-to-use website scan dashboard.
• Should provide detailed technical information.
• Should provide easy to understand reports.
• Should provide detailed risk analysis.
• Should provide feasible technical solutions.

There are many benefits that you can reap using these security tools:
• Keep data secure from any infections.
• Increase website availability.
• Gain visitor trust by providing a safe and secure environment.
• Take a proactive approach against security threats.
• A security assurance seal from a reputed website security provider helps you enhance brand reputation.
• Websites that are more secure gain higher rankings in search engines.


Optimize caching – Page Speed

Most web pages include resources that change infrequently, such as CSS files, image files, JavaScript files, and so on. These resources take time to download over the network, which increases the time it takes to load a web page. HTTP caching allows these resources to be saved, or cached, by a browser or proxy. Once a resource is cached, a browser or proxy can refer to the locally cached copy instead of having to download it again on subsequent visits to the web page. Thus caching is a double win: you reduce round-trip time by eliminating numerous HTTP requests for the required resources, and you substantially reduce the total payload size of the responses. Besides leading to a dramatic reduction in page load time for subsequent user visits, enabling caching can also significantly reduce the bandwidth and hosting costs for your site.

  1. Leverage browser caching
  2. Leverage proxy caching

Leverage browser caching


Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network.


HTTP/S supports local caching of static resources by the browser. Newer browsers (e.g. IE 7, Chrome) use a heuristic to decide how long to cache resources that don't have explicit caching headers. Older browsers may require that caching headers be set before they will fetch a resource from the cache, and some may never cache any resources sent over SSL.

To take advantage of the full benefits of caching consistently across all browsers, we recommend that you configure your web server to explicitly set caching headers and apply them to all cacheable static resources, not just a small subset (such as images). Cacheable resources include JS and CSS files, image files, and other binary object files (media files, PDFs, Flash files, etc.). In general, HTML is not static, and shouldn't be considered cacheable.

HTTP/1.1 provides the following caching response headers:

  • Expires and Cache-Control: max-age. These specify the “freshness lifetime” of a resource, that is, the time period during which the browser can use the cached resource without checking to see if a new version is available from the web server. They are "strong caching headers" that apply unconditionally; that is, once they're set and the resource is downloaded, the browser will not issue any GET requests for the resource until the expiry date or maximum age is reached.
  • Last-Modified and ETag. These specify some characteristic about the resource that the browser checks to determine if the files are the same. In the Last-Modified header, this is always a date. In the ETag header, this can be any value that uniquely identifies a resource (file versions or content hashes are typical). Last-Modified is a "weak" caching header in that the browser applies a heuristic to determine whether to fetch the item from cache or not. (The heuristics are different among different browsers.) However, these headers allow the browser to efficiently update its cached resources by issuing conditional GET requests when the user explicitly reloads the page. Conditional GETs don't return the full response unless the resource has changed at the server, and thus have lower latency than full GETs.

It is important to specify one of Expires or Cache-Control: max-age, and one of Last-Modified or ETag, for all cacheable resources. It is redundant to specify both Expires and Cache-Control: max-age, or to specify both Last-Modified and ETag.
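To make the interaction of these headers concrete, here is a minimal, hedged sketch of a freshness check in Python. The helper name `is_fresh` and the header-dictionary shape are our own assumptions for illustration, not part of any Page Speed tooling.

```python
# Hypothetical sketch: decide whether a cached response is still "fresh"
# using the strong caching headers described above.
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def is_fresh(headers, fetched_at, now):
    """True if the cached copy can be reused without a conditional GET."""
    for directive in headers.get("Cache-Control", "").split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
            return now - fetched_at < timedelta(seconds=max_age)
    expires = headers.get("Expires")
    if expires:
        return now < parsedate_to_datetime(expires)
    return False  # no strong caching header: the browser must revalidate
```

A real browser also consults the weak headers (Last-Modified/ETag) via conditional GETs once freshness expires; this sketch covers only the unconditional case.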


Set caching headers aggressively for all static resources.
For all cacheable resources, we recommend the following settings:

  • Set Expires to a minimum of one month, and preferably up to one year, in the future. (We prefer Expires over Cache-Control: max-age because it is more widely supported.) Do not set it to more than one year in the future, as that violates the RFC guidelines. If you know exactly when a resource is going to change, setting a shorter expiration is okay. But if you think it "might change soon" but don't know when, you should set a long expiration and use URL fingerprinting (described below). Setting caching aggressively does not "pollute" browser caches: as far as we know, all browsers clear their caches according to a Least Recently Used algorithm; we are not aware of any browsers that wait until resources expire before purging them.
  • Set the Last-Modified date to the last time the resource was changed. If the Last-Modified date is far enough in the past, chances are the browser won't refetch it.
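The two recommendations above can be sketched as a small helper that emits the suggested header values. The function name `caching_headers` is hypothetical; real servers usually set these via configuration (e.g. Apache's mod_expires) rather than application code.

```python
# Illustrative only: produce the recommended strong + weak caching headers.
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def caching_headers(last_modified, now):
    return {
        "Expires": format_datetime(now + timedelta(days=365), usegmt=True),
        "Cache-Control": "max-age=31536000",  # one year, in seconds
        "Last-Modified": format_datetime(last_modified, usegmt=True),
    }
```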
Use fingerprinting to dynamically enable caching.
For resources that change occasionally, you can have the browser cache the resource until it changes on the server, at which point the server tells the browser that a new version is available. You accomplish this by embedding a fingerprint of the resource in its URL (i.e. the file path). When the resource changes, so does its fingerprint, and in turn, so does its URL. As soon as the URL changes, the browser is forced to re-fetch the resource. Fingerprinting allows you to set expiry dates long into the future even for resources that change more frequently than that. Of course, this technique requires that all of the pages that reference the resource know about the fingerprinted URL, which may or may not be feasible, depending on how your pages are coded.
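A minimal sketch of the fingerprinting idea, assuming a content-hash naming scheme of our own choosing (hash inserted before the file extension):

```python
# Hypothetical fingerprinting helper: the URL changes iff the content does.
import hashlib

def fingerprinted_url(path, content):
    digest = hashlib.md5(content).hexdigest()  # 128-bit hex fingerprint
    base, dot, ext = path.rpartition(".")
    return f"{base}.{digest}{dot}{ext}"
```

Pages must then reference the fingerprinted name, typically via a template helper that recomputes it at build or deploy time.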
Set the Vary header correctly for Internet Explorer.
Internet Explorer does not cache any resources that are served with a Vary header containing any fields other than Accept-Encoding and User-Agent. To ensure these resources are cached by IE, make sure to strip out any other fields from the Vary header, or remove the Vary header altogether if possible.
Avoid URLs that cause cache collisions in Firefox.
The Firefox disk cache hash functions can generate collisions for URLs that differ only slightly, namely only on 8-character boundaries. When resources hash to the same key, only one of the resources is persisted to disk cache; the remaining resources with the same key have to be re-fetched across browser restarts. Thus, if you are using fingerprinting or are otherwise programmatically generating file URLs, to maximize cache hit rate, avoid the Firefox hash collision issue by ensuring that your application generates URLs that differ on more than 8-character boundaries.
Use the Cache-Control: public directive to enable HTTPS caching for Firefox.
Some versions of Firefox require the Cache-Control: public header to be set in order for resources sent over SSL to be cached on disk, even if the other caching headers are explicitly set. Although this header is normally used to enable caching by proxy servers (as described below), proxies cannot cache any content sent over HTTPS, so it is always safe to set this header for HTTPS resources.


For the stylesheet used to display the user's calendar after login, Google Calendar embeds a fingerprint in its filename: calendar/static/fingerprint_keydoozercompiled.css, where fingerprint_key is a 128-bit hexadecimal number. At the time this was observed in Page Speed's Show Resources panel, the fingerprint was set to 82b6bc440914c01297b99b4bca641a5d.

The fingerprinting mechanism allows the server to set the Expires header to exactly one year ahead of the request date; the Last-Modified header to the date the file was last modified; and the Cache-Control: max-age header to 31536000 (one year, in seconds). To cause the client to re-download the file in case it changes before its expiry date or maximum age, the fingerprint (and therefore the URL) changes whenever the file's content does.


Leverage proxy caching


Enabling public caching in the HTTP headers for static resources allows the browser to download resources from a nearby proxy server rather than from a more distant origin server.


In addition to browser caching, HTTP provides for proxy caching, which enables static resources to be cached on public web proxy servers, most notably those used by ISPs. This means that even first-time visitors to your site can benefit from caching: once a static resource has been requested by one user through the proxy, that resource is available to all other users whose requests go through the same proxy. Since those locations are likely to be in closer network proximity to your users than your servers, proxy caching can result in a significant reduction in network latency. Also, enabling proxy caching effectively gives you free web site hosting, since responses served from proxy caches don't draw on your servers' bandwidth at all.

You use the Cache-control: public header to indicate that a resource can be cached by public web proxies in addition to the browser that issued the request. With some exceptions (described below), you should configure your web server to set this header to public for cacheable resources.


Don't include a query string in the URL for static resources.
Most proxies, most notably Squid up through version 3.0, do not cache resources with a "?" in their URL even if a Cache-control: public header is present in the response. To enable proxy caching for these resources, remove query strings from references to static resources, and instead encode the parameters into the file names themselves.
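A sketch of that advice, assuming a hypothetical "?v=" versioning convention; the helper name `unqueried` is our own:

```python
# Illustrative: fold a version query parameter into the file name so that
# proxies such as older Squid versions will cache the resource.
def unqueried(url):
    if "?v=" not in url:
        return url
    path, version = url.split("?v=", 1)
    base, dot, ext = path.rpartition(".")
    return f"{base}.v{version}{dot}{ext}"
```

The server then needs a rewrite rule mapping the versioned name back to the real file.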
Don't enable proxy caching for resources that set cookies.
Setting the header to public effectively shares resources among multiple users, which means that any cookies set for those resources are shared as well. While many proxies won't actually cache any resources with cookie headers set, it's better to avoid the risk altogether. Either set the Cache-Control header to private or serve these resources from a cookieless domain.
Be aware of issues with proxy caching of JS and CSS files.
Some public proxies have bugs that prevent them from detecting the presence of the Content-Encoding response header. This can result in compressed versions being delivered to client browsers that cannot properly decompress the files. Since these files should always be gzipped by your server, do either of the following to ensure that the client can correctly read them:

  • Set the Cache-Control header to private. This disables proxy caching altogether for these resources. If your application is multi-homed around the globe and relies less on proxy caches for user locality, this might be an appropriate setting.
  • Set the Vary: Accept-Encoding response header. This instructs the proxies to cache two versions of the resource: one compressed, and one uncompressed. The correct version of the resource is delivered based on the client request header. This is a good choice for applications that are singly homed and depend on public proxies for user locality.
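For concreteness, here is a hedged sketch of the second option: serving either variant under Vary: Accept-Encoding. The function `choose_body` and its return shape are inventions for illustration.

```python
# Illustrative: pick the compressed or uncompressed variant per request.
import gzip

def choose_body(raw, accept_encoding):
    if "gzip" in accept_encoding:
        return gzip.compress(raw), {"Content-Encoding": "gzip",
                                    "Vary": "Accept-Encoding"}
    return raw, {"Vary": "Accept-Encoding"}
```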

Learning to Use Regular Expressions

This tutorial was first published by IBM developerWorks. This version contains a few minor corrections that readers have suggested since the original publication. An expanded and updated version can be found in my book, Text Processing in Python.

Matching Patterns in Text: The Basics

Character literals


Mary had a little lamb.
And everywhere that Mary
went, the lamb was sure
to go.


The very simplest pattern matched by a regular expression is a literal character or a sequence of literal characters. Anything in the target text that consists of exactly those characters in exactly the order listed will match. A lower case character is not identical with its upper case version, and vice versa. A space in a regular expression, by the way, matches a literal space in the target (this is unlike most programming languages or command-line tools, where spaces separate keywords).
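Using Python's re module as a stand-in for any regex tool, the literal pattern /Mary/ (assumed from the highlighted example) behaves like this:

```python
import re

rhyme = """Mary had a little lamb.
And everywhere that Mary
went, the lamb was sure
to go."""

print(re.findall("Mary", rhyme))  # exact-case literal matches
print(re.findall("mary", rhyme))  # case matters: nothing matches
```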

"Escaped" characters literals


Special characters must be escaped.*

A number of characters have special meanings to regular expressions. A symbol with a special meaning can be matched, but to do so you must prefix it with the backslash character (this includes the backslash character itself: to match one backslash in the target, your regular expression should include "\\").
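A short Python sketch of escaping: matching the literal ".*" at the end of the example line, and matching one literal backslash.

```python
import re

target = "Special characters must be escaped.*"
print(re.findall(r"\.\*", target))  # the literal ".*" at the end
print(re.findall(r"\\", r"a\b"))    # one literal backslash in the target
```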

Positional special characters


Mary had a little lamb.
And everywhere that Mary
went, the lamb was sure
to go.


Two special characters are used in almost all regular expression tools to mark the beginning and end of a line: caret (^) and dollarsign ($). To match a caret or dollarsign as a literal character, you must escape it (i.e. precede it by a backslash "\").

An interesting thing about the caret and dollarsign is that they match zero-width patterns. That is the length of the string matched by a caret or dollarsign by itself is zero (but the rest of the regular expression can still depend on the zero-width match). Many regular expression tools provide another zero-width pattern for word-boundary (\b). Words might be divided by whitespace like spaces, tabs, newlines, or other characters like nulls; the word-boundary pattern matches the actual point where a word starts or ends, not the particular whitespace characters.
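In Python, the zero-width anchors look like this (re.MULTILINE makes ^ match at each line start rather than only at the start of the whole target):

```python
import re

rhyme = """Mary had a little lamb.
And everywhere that Mary
went, the lamb was sure
to go."""

print(re.findall(r"^\w+", rhyme, re.MULTILINE))  # first word of each line
print(re.findall(r"\blamb\b", rhyme))            # word-boundary matches
```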

The "wildcard" character


Mary had a little lamb.
And everywhere that Mary
went, the lamb was sure
to go.
In regular expressions, a period can stand for any character. Normally, the newline character is not included, but most tools have optional switches to force inclusion of the newline character also. Using a period in a pattern is a way of requiring that "something" occurs here, without having to decide what.

Users who are familiar with DOS command-line wildcards will know the question-mark as filling the role of "some character" in command masks. But in regular expressions, the question-mark has a different meaning, and the period is used as a wildcard.
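A quick Python illustration of the wildcard, including the newline exception and the switch that lifts it:

```python
import re

print(re.findall(r"l..b", "Mary had a little lamb."))    # "something" between l and b
print(re.search(r"a.b", "a\nb"))                         # None: . skips newlines
print(re.search(r"a.b", "a\nb", re.DOTALL) is not None)  # True with the switch
```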

Grouping regular expressions

/(Mary)( )(had)/ 

Mary had a little lamb.
And everywhere that Mary
went, the lamb was sure
to go.
A regular expression can have literal characters in it, and also zero-width positional patterns. Each literal character or positional pattern is an atom in a regular expression. You may also group several atoms together into a small regular expression that is part of a larger regular expression. One might be inclined to call such a grouping a "molecule," but normally it is also called an atom.

In older Unix-oriented tools like grep, subexpressions must be grouped with escaped parentheses, e.g. /\(Mary\)/. In Perl and most more recent tools (including egrep), grouping is done with bare parentheses, but matching a literal parenthesis requires escaping it in the pattern (the example alongside follows Perl).
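The section's example pattern, run in Python (which, like Perl, uses bare parentheses for grouping):

```python
import re

m = re.search(r"(Mary)( )(had)", "Mary had a little lamb.")
print(m.group(0))  # the whole match
print(m.groups())  # each parenthesized atom separately
```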

Character classes


Mary had a little lamb.
And everywhere that Mary
went, the lamb was sure
to go.
Rather than name only a single character, you can include a pattern in a regular expression that matches any of a set of characters.

A set of characters can be given as a simple list inside square brackets, e.g. /[aeiou]/ will match any single lowercase vowel. For letter or number ranges you may also use only the first and last letter of a range, with a dash in the middle, e.g. /[A-Ma-m]/ will match any lowercase or uppercase letter in the first half of the alphabet.

Many regular expression tools also provide escape-style shortcuts to the most commonly used character class, such as \s for a whitespace character and \d for a digit. You could always define these character classes with square brackets, but the shortcuts can make regular expressions more compact and more readable.
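The character classes above, exercised in Python:

```python
import re

text = "Mary had a little lamb."
print(re.findall(r"[aeiou]", text))   # any single lowercase vowel
print(re.findall(r"[A-Ma-m]", text))  # ranges: first half of the alphabet
print(re.findall(r"\d", "room 101"))  # \d shortcut for [0-9]
```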

Complement operator


Mary had a little lamb.
And everywhere that Mary
went, the lamb was sure
to go.
The caret symbol can actually have two different meanings in regular expressions. Most of the time, it means to match the zero-length pattern for line beginnings. But if it is used at the beginning of a character class, it reverses the meaning of the character class. Everything not included in the listed character set is matched.
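In Python, a complemented class picks out exactly the characters not in the set; here, runs of everything except lowercase vowels and spaces:

```python
import re

text = "Mary had a little lamb."
print(re.findall(r"[^aeiou ]+", text))  # runs of non-vowel, non-space characters
```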

Alternation of patterns


The pet store sold cats, dogs, and birds.


=first first= # =second second= # =first= # =second=


Using character classes is a way of indicating that either one thing or another thing can occur in a particular spot. But what if you want to specify that either of two whole subexpressions occur in a position in the regular expression? For that, you use the alternation operator, the vertical bar ("|"). This is the symbol that is also used to indicate a pipe in Unix/DOS shells, and is sometimes called the pipe character.

The pipe character in a regular expression indicates an alternation between everything in the group enclosing it. What this means is that even if there are several groups to the left and right of a pipe character, the alternation greedily asks for everything on both sides. To select the scope of the alternation, you must define a group that encompasses the patterns that may match. The example illustrates this.
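A Python sketch of the scope issue, using the sample line above; the two patterns are our reconstruction of the contrast being illustrated:

```python
import re

line = "=first first= # =second second= # =first= # =second="
# Ungrouped: the pipe splits the entire pattern into "=first" OR "second=".
print(re.findall(r"=first|second=", line))
# Grouped: the alternation is confined to the spot between the "=" signs.
print(re.findall(r"=(?:first|second)=", line))
```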

The basic abstract quantifier


Match with zero in the middle: @@
Subexpression occurs, but...: @=+=ABC@
Lots of occurrences: @=+==+==+==+==+=@
Must repeat entire pattern: @=+==+=+==+=@
One of the most powerful and common things you can do with regular expressions is to specify how many times an atom occurs in a complete regular expression. Sometimes you want to specify something about the occurrence of a single character, but very often you are interested in specifying the occurrence of a character class or a grouped subexpression.

There is only one quantifier included with "basic" regular expression syntax, the asterisk ("*"); in English this has the meaning "some or none" or "zero or more." If you want to specify that any number of an atom may occur as part of a pattern, follow the atom by an asterisk.

Without quantifiers, grouping expressions doesn't really serve as much purpose, but once we can add a quantifier to a subexpression we can say something about the occurrence of the subexpression as a whole. Take a look at the example.
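Sketching the sample lines above in Python, with the pattern /@(=+=)*@/ assumed from the targets (note the + must be escaped here, since it is a literal character, not a quantifier):

```python
import re

pat = re.compile(r"@(?:=\+=)*@")
print(bool(pat.search("@@")))              # zero units in the middle
print(bool(pat.search("@=+=ABC@")))        # unit occurs, but no full match
print(bool(pat.search("@=+==+==+==+==+=@")))  # lots of whole units
print(bool(pat.search("@=+==+=+==+=@")))   # a broken unit spoils the match
```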

Matching Patterns in Text: Intermediate

More abstract quantifiers


In a certain way, the lack of any quantifier symbol after an atom quantifies the atom anyway: it says the atom occurs exactly once. Extended regular expressions (which most tools support) add a few other useful numbers to "once exactly" and "zero or more times." The plus-sign ("+") means "one or more times" and the question-mark ("?") means "zero or one times." These quantifiers are by far the most common enumerations you wind up naming.

If you think about it, you can see that the extended regular expressions do not actually let you "say" anything the basic ones do not. They just let you say it in a shorter and more readable way. For example, "(ABC)+" is equivalent to "(ABC)(ABC)*"; and "X(ABC)?Y" is equivalent to "XABCY|XY". If the atoms being quantified are themselves complicated grouped subexpressions, the question-mark and plus-sign can make things a lot shorter.
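These equivalences can be checked mechanically in Python:

```python
import re

for s in ("XY", "XABCY", "ABC", "ABCABC", ""):
    assert bool(re.fullmatch(r"(ABC)+", s)) == bool(re.fullmatch(r"(ABC)(ABC)*", s))
    assert bool(re.fullmatch(r"X(ABC)?Y", s)) == bool(re.fullmatch(r"XABCY|XY", s))
print("the extended and basic forms agree on every sample")
```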

Numeric quantifiers

/a{5} b{,6} c{4,8}/

aaaaa bbbbb ccccc
aaa bbb ccc
aaaaa bbbbbbbbbbbbbb ccccc

/a+ b{3,} c?/

aaaaa bbbbb ccccc
aaa bbb ccc
aaaaa bbbbbbbbbbbbbb ccccc

/a{5} b{6,} c{4,8}/

aaaaa bbbbb ccccc
aaa bbb ccc
aaaaa bbbbbbbbbbbbbb ccccc
Using extended regular expressions, you can specify arbitrary pattern occurrence counts using a more verbose syntax than the question-mark, plus-sign, and asterisk quantifiers. The curly-braces ("{" and "}") can surround a precise count of how many occurrences you are looking for.

The most general form of the curly-brace quantification uses two range arguments (the first must be no larger than the second, and both must be non-negative integers). The occurrence count must fall between the minimum and maximum indicated (inclusive). As shorthand, either argument may be left empty: if so, the minimum/maximum is specified as zero/infinity, respectively. If only one argument is used (with no comma), exactly that number of occurrences are matched.
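The first sample pattern above can be run in Python; we spell the open-ended lower bound {0,6} here, since the bare {,6} shorthand is not portable across all tools:

```python
import re

pat = re.compile(r"a{5} b{0,6} c{4,8}")  # the tutorial's /a{5} b{,6} c{4,8}/
for line in ("aaaaa bbbbb ccccc",
             "aaa bbb ccc",
             "aaaaa bbbbbbbbbbbbbb ccccc"):
    print(repr(line), bool(pat.search(line)))
```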


/(abc|xyz) \1/

jkl abc xyz
jkl xyz abc
jkl abc abc
jkl xyz xyz

/(abc|xyz) (abc|xyz)/

jkl abc xyz
jkl xyz abc
jkl abc abc
jkl xyz xyz
One powerful option in creating search patterns is specifying that a subexpression that was matched earlier in a regular expression is matched again later in the expression. We do this using backreferences. Backreferences are named by the numbers 1 through 9, preceded by the backslash/escape character when used in this manner. These backreferences refer to each successive group in the match pattern, as in /(one)(two)(three) \1\2\3/: each numbered backreference matches whatever text the group with the corresponding number matched.

It is important to note something the example illustrates. What gets matched by a backreference is the same literal string matched the first time, even if the pattern that matched the string could have matched other strings. Simply repeating the same grouped subexpression later in the regular expression does not match the same targets as using a backreference (but you have to decide what it is you actually want to match in either case).

Backreferences refer back to whatever occurred in the previous grouped expressions, in the order those grouped expressions occurred. Because of the naming convention (\1-\9), many tools limit you to nine backreferences. Some tools allow actual naming of backreferences and/or saving them to program variables. The more advanced parts of this tutorial touch on these topics.
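Here are the section's two patterns in Python:

```python
import re

lines = ["jkl abc xyz", "jkl xyz abc", "jkl abc abc", "jkl xyz xyz"]
# \1 must repeat the exact text group 1 matched:
print([s for s in lines if re.search(r"(abc|xyz) \1", s)])
# An independently repeated group may match either alternative again:
print([s for s in lines if re.search(r"(abc|xyz) (abc|xyz)", s)])
```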

Don't match more than you want to


-- I want to match the words that start
-- with 'th' and end with 's'.
this line matches too much
Quantifiers in regular expressions are greedy. That is, they match as much as they possibly can.

Probably the easiest mistake to make in composing regular expressions is to match too much. When you use a quantifier, you want it to match everything (of the right sort) up to the point where you want to finish your match. But when using the "*", "+" or numeric quantifiers, it is easy to forget that the last bit you are looking for might occur later in a line than the one you are interested in.

Tricks for restraining matches


-- I want to match the words that start
-- with 'th' and end with 's'.
this line matches too much
Often if you find that your regular expressions are matching too much, a useful procedure is to reformulate the problem in your mind. Rather than thinking about "what am I trying to match later in the expression?" ask yourself "what do I need to avoid matching in the next part?" Often this leads to more parsimonious pattern matches. Often the way to avoid a pattern is to use the complement operator and a character class. Look at the example, and think about how it works.

The trick here is that there are two different ways of formulating almost the same sequence. You can either think you want to keep matching until you get to XYZ, or you can think you want to keep matching unless you get to XYZ. These are subtly different.
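The contrast can be seen in Python; the two patterns here (a greedy wildcard versus a complement class) are a reconstruction of the trick being described:

```python
import re

line = "this line matches too much"
print(re.search(r"th.*s ", line).group())     # greedy: runs to the last "s "
print(re.search(r"th[^s]*s ", line).group())  # stops at the first "s"
```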

For people who have thought about basic probability, the same pattern occurs. The chance of rolling a 6 on a die in one roll is 1/6. What is the chance of rolling a 6 in six rolls? A naive calculation puts the odds at 1/6+1/6+1/6+1/6+1/6+1/6, or 100%. This is wrong, of course (after all, the chance after twelve rolls isn't 200%). The correct calculation is "how do I avoid rolling a 6 for six rolls?" -- i.e. 5/6*5/6*5/6*5/6*5/6*5/6, or about 33%. The chance of getting a 6 is the same chance as not avoiding it (or about 66%). In fact, if you imagine transcribing a series of dice rolls, you could apply a regular expression to the written record, and similar thinking applies.

Comments on modification tools

Not all tools that use regular expressions allow you to modify target strings. Some simply locate the matched pattern; the most widely used regular expression tool is probably grep, which is a tool for searching only. Text editors, for example, may or may not allow replacement in their regular expression search facility. As always, consult the documentation on your individual tool.

Of the tools that allow you to modify target text, there are a few differences to keep in mind. The way you actually specify replacements varies between tools: a text editor might have a dialog box; command-line tools use delimiters between match and replacement; programming languages typically call functions with arguments for match and replacement patterns.

Another important difference to keep in mind is what is getting modified. Unix-oriented command-line tools typically use pipes and STDOUT rather than modifying files in place. Using a sed command, for example, writes the modifications to the console but does not change the original target file. Text editors and programming languages are more likely to actually modify a file in place.

A note on modification examples

For purposes of this tutorial, examples will continue to use the sed style slash delimiters. Specifically, the examples will indicate the substitution command and the global modifier, as with "s/this/that/g". This expression means: "Replace the string 'this' with the string 'that' everywhere in the target text."

Examples will consist of the modification command, an input line, and an output line. The output line will have any changes emphasized. Also, each input/output line will be preceded by a less-than or greater-than symbol to help distinguish them (the order will be as described also), which is suggestive of redirection symbols in Unix shells.

A literal-string modification example


< The zoo had wild dogs, bobcats, lions, and other wild cats.
> The zoo had wild dogs, bobdogs, lions, and other wild dogs.
Let us take a look at a couple modification examples that build on what we have already covered. This one simply substitutes some literal text for some other literal text. The search-and-replace capability of many tools can do this much, even without using regular expressions.
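Judging from the output shown, the substitution appears to be s/cat/dog/g; in Python:

```python
import re

zoo = "The zoo had wild dogs, bobcats, lions, and other wild cats."
# Reconstructed from the output shown above: s/cat/dog/g
print(re.sub("cat", "dog", zoo))
```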

A pattern-match modification example


< The zoo had wild dogs, bobcats, lions, and other wild cats.
> The zoo had wild snakes, bobsnakes, lions, and other wild snakes.


< The zoo had wild dogs, bobcats, lions, and other wild cats.
> The zoo had nice dogs, bobcats, nice, and other nice cats.
Most of the time, if you are using regular expressions to modify a target text, you will want to match more general patterns than just literal strings. Whatever is matched is what gets replaced (even if it is several different strings in the target).
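Judging from the two outputs shown, the substitutions appear to be s/cat|dog/snake/g and s/wild|lions/nice/g; in Python:

```python
import re

zoo = "The zoo had wild dogs, bobcats, lions, and other wild cats."
print(re.sub(r"cat|dog", "snake", zoo))    # every cat or dog becomes a snake
print(re.sub(r"wild|lions", "nice", zoo))  # whatever matches is what gets replaced
```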

Modification using backreferences

s/([A-Z])([0-9]{2,4}) /\2:\1 /g 

< A37 B4 C107 D54112 E1103 XXX
> 37:A B4 107:C D54112 1103:E XXX
It is nice to be able to insert a fixed string everywhere a pattern occurs in a target text. But frankly, doing that is not very context sensitive. A lot of times, we do not want just to insert fixed strings, but rather to insert something that bears much more relation to the matched patterns. Fortunately, backreferences come to our rescue here. You can use backreferences in the pattern-matches themselves, but it is even more useful to be able to use them in replacement patterns. By using replacement backreferences, you can pick and choose from the matched patterns to use just the parts you are interested in.

To aid readability, subexpressions will be grouped with bare parentheses (as with Perl), rather than with escaped parentheses (as with sed).
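The section's own command translates directly to Python, where replacement backreferences are also written \1, \2:

```python
import re

print(re.sub(r"([A-Z])([0-9]{2,4}) ", r"\2:\1 ",
             "A37 B4 C107 D54112 E1103 XXX"))
```

Note that B4 (too few digits) and D54112 (too many to leave a trailing space) fail the {2,4} constraint and are left alone, exactly as in the sample output.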

Another warning on mismatching

This tutorial has already warned about the danger of matching too much with your regular expression patterns. But the danger is so much more serious when you do modifications, that it is worth repeating. If you replace a pattern that matches a larger string than you thought of when you composed the pattern, you have potentially deleted some important data from your target.

It is always a good idea to try out your regular expressions on diverse target data that is representative of your production usage. Make sure you are matching what you think you are matching. A stray quantifier or wildcard can make a surprisingly wide variety of texts match what you thought was a specific pattern. And sometimes you just have to stare at your pattern for a while, or find another set of eyes, to figure out what is really going on even after you see what matches. Familiarity might breed contempt, but it also instills competence.

Advanced Regular Expression Extensions

About advanced features

Some very useful enhancements are included in some regular expression tools. These enhancements often make the composition and maintenance of regular expressions considerably easier. But check with your own tool to see what is supported.

The programming language Perl is probably the most sophisticated tool for regular expression processing, which explains much of its popularity. The examples illustrated will use Perl-ish code to explain concepts. Other programming languages, especially other scripting languages such as Python, have a similar range of enhancements. But for purposes of illustration, Perl's syntax most closely mirrors the regular expression tools it builds on, such as ed, ex, grep, sed, and awk.

Non-greedy quantifiers


-- I want to match the words that start
-- with 'th' and end with 's'.
this line matches just right
this # thus # thistle


-- I want to match the words that start
-- with 'th' and end with 's'.
this # thus # thistle
this line matches just right

/th.*?s /

-- I want to match the words that start
-- with 'th' and end with 's'. (FINALLY!)
this # thus # thistle
this line matches just right
Earlier in the tutorial, the problems of matching too much were discussed, and some workarounds were suggested. Some regular expression tools are nice enough to make this easier by providing optional non-greedy quantifiers. These quantifiers grab as little as possible while still matching whatever comes next in the pattern (instead of as much as possible).

Non-greedy quantifiers have the same syntax as regular greedy ones, except with the quantifier followed by a question-mark. For example, a non-greedy pattern might look like: "/A[A-Z]*?B/". In English, this means "match an A, followed by only as many capital letters as are needed to find a B."

One little thing to look out for is the fact that the pattern "/[A-Z]*?./" will always match zero capital letters. If you use non-greedy quantifiers, watch out for matching too little, which is a symmetric danger.
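To see the difference concretely, here is a small sketch in Python, whose re module supports the same "*?" syntax, contrasting the greedy and non-greedy forms of the pattern discussed above:

```python
import re

txt = "this # thus # thistle"

# Greedy: ".*" runs as far as it can, then backtracks only to the
# LAST "s ", so several words are swallowed in a single match.
print(re.findall(r"th.*s ", txt))    # ['this # thus ']

# Non-greedy: ".*?" stops at the FIRST "s ", so each word that
# starts with "th" and ends with "s " is matched on its own.
print(re.findall(r"th.*?s ", txt))   # ['this ', 'thus ']
```

Note that "thistle" is matched by neither version: its "s" is followed by a "t", and no trailing space ever appears after it.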

Pattern-match modifiers

/M.*[ise] /

MAINE # Massachusetts # Colorado #
mississippi # Missouri # Minnesota #

/M.*[ise] /i

MAINE # Massachusetts # Colorado #
mississippi # Missouri # Minnesota #

/M.*[ise] /gis

MAINE # Massachusetts # Colorado #
mississippi # Missouri # Minnesota #
We already saw one pattern-match modifier in the modification examples: the global modifier. In fact, with many regular expression tools, we should have been using the "g" modifier for all our pattern matches. Without the "g", many tools will match only the first occurrence of a pattern on a line in the target. So this is a useful modifier (though not one you will necessarily always want). Let us look at some others.

As a little mnemonic, it is nice to remember the word "gismo" (it even seems somehow appropriate). The most frequent modifiers are:

  • g - Match globally
  • i - Case-insensitive match
  • s - Treat string as single line
  • m - Treat string as multiple lines
  • o - Only compile pattern once

The o option is an implementation optimization, and not really a regular expression issue (but it helps the mnemonic). The single-line option allows the wildcard to match a newline character (it won't otherwise). The multiple-line option causes "^" and "$" to match the beginning and end of each line in the target, not just the beginning/end of the target as a whole (with sed or grep this is the default). The insensitive option ignores differences in the case of letters.
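As a sketch of how these modifiers combine, here is a Python version of the example panels above. Python spells the modifiers as flags (re.I for insensitive, re.S for single-line, re.M for multi-line); there is no "g" flag because re.findall() and re.sub() are global by default, and no "o" flag because re.compile() fills that role:

```python
import re

txt = ("MAINE # Massachusetts # Colorado #\n"
       "mississippi # Missouri # Minnesota #")

# No flags: case-sensitive, and "." stops at the newline
print(re.findall(r"M.*[ise] ", txt))
# ['MAINE # Massachusetts ', 'Missouri ']

# re.I: lowercase 'mississippi' can now start a match
print(re.findall(r"M.*[ise] ", txt, re.I))
# ['MAINE # Massachusetts ', 'mississippi # Missouri ']

# re.I | re.S: the wildcard crosses the newline, giving one big match
print(re.findall(r"M.*[ise] ", txt, re.I | re.S))
```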

Changing backreference behavior


s/([A-Z])(?:-[a-z]{3}-)([0-9]*)/\1\2/g

< A-xyz-37 # B:abcd:142 # C-wxy-66 # D-qrs-93
> A37 # B:abcd:142 # C66 # D93
Backreferencing in replacement patterns is very powerful, but it is also easy to end up with more than nine groups in a complex regular expression. Quite apart from exhausting the available backreference names, it is often more legible to refer to the parts of a replacement pattern in sequential order. To handle this issue, some regular expression tools allow "grouping without backreferencing."

A group that should not also be treated as a backreference has a question-mark and colon at the beginning of the group, as in "(?:pattern)". In fact, you can use this syntax even when your backreferences are in the search pattern itself.
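A Python sketch of the same idea (Python uses the identical "(?:...)" syntax): because the middle group is non-capturing, the digits become group 2 in the replacement rather than group 3.

```python
import re

txt = "A-xyz-37 # B:abcd:142 # C-wxy-66 # D-qrs-93"

# (?:-[a-z]{3}-) matches but is NOT counted as a backreference,
# so the trailing digits are \2 instead of \3. The colon-delimited
# item never matches the pattern, so it passes through unchanged.
print(re.sub(r"([A-Z])(?:-[a-z]{3}-)([0-9]*)", r"\1\2", txt))
# A37 # B:abcd:142 # C66 # D93
```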

Naming backreferences

import re
txt = "A-xyz-37 # B:abcd:142 # C-wxy-66 # D-qrs-93"
print(re.sub(r"(?P<prefix>[A-Z])(-[a-z]{3}-)(?P<id>[0-9]*)",
             r"\g<prefix>\g<id>", txt))

A37 # B:abcd:142 # C66 # D93
The language Python offers a particularly handy syntax for complex pattern backreferences. Rather than having to keep track of the numbering of matched groups, you can give them a name.

The syntax of using regular expressions in Python is a standard programming language function/method style of call, rather than Perl- or sed-style slash delimiters. Check your own tool to see if it supports this facility.

Lookahead assertions

s/([A-Z]-)(?=[a-z]{3})([A-Za-z0-9]*)/\2\1/g

< A-xyz37 # B-ab6142 # C-Wxy66 # D-qrs93
> xyz37A- # B-ab6142 # C-Wxy66 # qrs93D-

s/([A-Z]-)(?![a-z]{3})([A-Za-z0-9]*)/\2\1/g

< A-xyz37 # B-ab6142 # C-Wxy66 # D-qrs93
> A-xyz37 # ab6142B- # Wxy66C- # D-qrs93
Another trick of advanced regular expression tools is "lookahead assertions." These are similar to regular grouped subexpressions, except that they do not actually grab what they match. There are two advantages to using lookahead assertions. On the one hand, a lookahead assertion can function in a similar way to a group that is not backreferenced; that is, you can match something without counting it in backreferences. More significantly, however, a lookahead assertion can specify that the next chunk of a pattern has a certain form, but let a different subexpression actually grab it (usually for purposes of backreferencing that other subexpression).

There are two kinds of lookahead assertions: positive and negative. As you would expect, a positive assertion specifies that something does come next, and a negative one specifies that something does not come next. Emphasizing their connection with non-backreferenced groups, the syntax for lookahead assertions is similar: (?=pattern) for positive assertions, and (?!pattern) for negative assertions.
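The substitutions above can be sketched in Python, whose lookahead syntax is the same; the character class [A-Za-z0-9] for the word body is an assumption made so that every swapped item (including "Wxy66") is captured:

```python
import re

txt = "A-xyz37 # B-ab6142 # C-Wxy66 # D-qrs93"

# Positive: swap prefix and body only when three lowercase
# letters immediately follow the dash.
print(re.sub(r"([A-Z]-)(?=[a-z]{3})([A-Za-z0-9]*)", r"\2\1", txt))
# xyz37A- # B-ab6142 # C-Wxy66 # qrs93D-

# Negative: swap only when three lowercase letters do NOT follow.
print(re.sub(r"([A-Z]-)(?![a-z]{3})([A-Za-z0-9]*)", r"\2\1", txt))
# A-xyz37 # ab6142B- # Wxy66C- # D-qrs93
```

Note that the two outputs are complementary: every item transformed by the positive assertion is left alone by the negative one, and vice versa.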

Making regular expressions more readable

/                   # identify URLs within a text file
              [^="] # do not match URLs in IMG tags like:
                    # <img src="">
(?:http|ftp|gopher) # make sure we find a resource type
              :\/\/ # ...needs to be followed by colon-slash-slash
          [^ \n\r]+ # stuff other than space, newline, CR is in URL
        (?=[\s\.,]) # assert: followed by whitespace/period/comma

The URL for my site is:  You
might also enjoy for a good
place to download files.
In the later examples, we have started to see just how complicated regular expressions can get. These examples are not the half of it. It is possible to do some almost absurdly difficult-to-understand things with regular expressions (things that are nonetheless useful).

There are two basic facilities that some of the more advanced regular expression tools provide for clarifying expressions. One is allowing regular expressions to continue over multiple lines (by ignoring whitespace such as trailing spaces and newlines). The second is allowing comments within regular expressions. Some tools allow you to do one or the other of these things alone, but when a pattern gets complicated, do both!

The example given uses Perl's x (extended) modifier to enable commented, multi-line regular expressions. Consult the documentation for your own tool for details on how to compose these.
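In Python, for instance, the same facility is spelled re.VERBOSE (or re.X). Here is a sketch of the URL pattern above applied to a made-up sample sentence (the URL and text are hypothetical placeholders):

```python
import re

# re.VERBOSE ignores unescaped whitespace and allows '#' comments,
# much like Perl's x modifier. Whitespace INSIDE a character class,
# such as the space in [^ \n\r], is still significant.
url = re.compile(r"""
    [^="]                # guard: do not match URLs inside src="..."
    (?:http|ftp|gopher)  # resource type
    ://                  # ...followed by colon-slash-slash
    [^ \n\r]+            # URL body: no space, newline, or CR
    (?=[\s.,])           # assert: whitespace/period/comma follows
""", re.VERBOSE)

txt = "The URL for my site is: http://mysite.com/index.html. Enjoy!"
# The match keeps the one guard character before "http", and the
# greedy body also swallows the sentence-ending period.
print(repr(url.search(txt).group()))
```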