
Scalable SQL – Michael Rys, Communications of the ACM, Vol. 54 No. 6, Pages 48-53

One of the leading motivators for NoSQL innovation is the desire to achieve very high scalability to handle the vagaries of Internet-size workloads. Yet many big social Web sites (such as Facebook, MySpace, and Twitter) and many other Web sites and distributed tier 1 applications that require high scalability (such as e-commerce and banking) reportedly remain SQL-based for their core data stores and services.

The question is, how do they do it?

The main goal of the NoSQL/big data movement is to achieve agility. Among the variety of agility dimensions—such as model agility (ease and speed of changing data models), operational agility (ease and speed of changing operational aspects), and programming agility (ease and speed of application development)—one of the most important is the ability to quickly and seamlessly scale an application to accommodate large amounts of data, users, and connections. Scalable architectures are especially important for large distributed applications such as social networking sites, e-commerce Web sites, and point-of-sale/branch infrastructures for more traditional stores and enterprises where the scalability of the application is directly tied to the scalability and success of the business.

These applications have several scalability requirements:

  • Scalability in terms of user load. The application needs to be able to scale to a large number of users, potentially in the millions.
  • Scalability in terms of data load. The application must be able to scale to a large amount of data, potentially petabytes, either produced by a few users or produced as the aggregate of many users.
  • Computational scalability. Operations on the data should be able to scale for both an increasing number of users and increasing data sizes.
  • Scale agility. In order to scale to increasing or decreasing application load, the architecture and operational environment should provide the ability to add or remove resources quickly, without application changes or impact on the availability of the application.

Scalable Architectures

Several major architectural approaches achieve high-level scalability. Most of them provide scale-out based on some form of functional and/or data partitioning and distributing the work across many processing nodes.

Functional partitioning often follows the service-oriented paradigm of building the application with several independent services each performing a specific task. This allows the application to scale out by assigning separate resources to these services as needed. Functional scale-out partitioning alone, however, often does not provide enough scalability since the number of tasks is limited and not in direct relationship to the big drivers of scalability requirements: the number of users and size of data. So functional partitioning is often combined with data partitioning.

Data partitioning distributes the application's processing over a set of data partitions. Different forms of data partitioning are deployed based on the topology of the processing nodes and the characteristics of the data. For example, if the user base is geographically dispersed and there is a locality requirement for scalability and performance reasons, such as in worldwide social networking sites, then data often is partitioned according to those geographic boundaries. On the other hand, data may be more randomly partitioned—for example, based on customer IDs—if the scale-out requirements are more constrained by the cost of running data-analysis algorithms over the data. In this case, equal partition sizes are more important.
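As a rough illustration of partitioning by customer ID, the following sketch maps a customer to one of a fixed number of partitions. The hash function and the partition count are illustrative assumptions, not details taken from any of the sites mentioned.

    using System;

    // Minimal sketch: deterministic, hash-based routing of a customer ID to a partition.
    static class PartitionRouter
    {
        public static int PartitionFor(long customerId, int partitionCount)
        {
            // A stable hash keeps the same customer on the same partition;
            // the modulus spreads customers roughly evenly across partitions.
            ulong hash = (ulong)customerId * 2654435761UL;   // Knuth multiplicative hash
            return (int)(hash % (ulong)partitionCount);
        }
    }

    // Usage: int partition = PartitionRouter.PartitionFor(1234567, 440);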

Once an application is built using a distributed model to achieve scale, it will have to deal with a set of requirements above and beyond simple centralized application structures:

  • Because of the distribution of both data and processing, the database that in a centralized application model would provide a consistent view of the data and transactional execution is now distributed among many databases. Thus, the application (or a middle tier) has to provide an additional transactional/consistency layer to provide consistency across the partitions.
  • In addition, changes to the applications have to be rolled out to all the partitions in a way that will not interfere with the consistency guarantees and requirements of the application. For example, if the application issues distributed queries against a set of tables that are partitioned across several nodes, and the application is updating the schema of some of these distributed tables, then either the schema change needs to be backward-compatible so it can be rolled out locally without affecting the ongoing queries, or the schema must be updated globally, thus impacting the application's availability during the rollout phase.
  • Finally, there is an increased probability of partition node failures and network partitioning. Therefore, nodes need to be made redundant and applications have to be resilient to network partitioning.

Furthermore, all three of these requirements have to be fulfilled without negatively impacting the availability of the application's services, the main reason why the application probably was scaled out in the first place.

In 2000, Eric Brewer made the conjecture that it is impossible for a distributed Web service to provide all three guarantees—consistency, availability, partition tolerance—at the same time. This conjecture is now commonly known as the CAP theorem and is one of the main arguments why traditional relational database techniques that provide strong ACID guarantees (atomic transactions, transactional consistency and isolation, and data durability) cannot provide both the partition tolerance and availability required by large-scale distributed applications.

So why are many of the leading social networking sites (Facebook, MySpace, Twitter), e-commerce Web sites (hotel reservation systems and shopping sites), and large banking applications still implemented using traditional database systems such as MySQL (Facebook, Twitter) or SQL Server (MySpace, Choice Hotels International, Bank Itau) instead of using the new NoSQL systems?

How Do You Scale Out with SQL?

The high-level answer is that the application architecture is still weighing the same trade-offs required by the CAP theorem. Given that the availability of the application has to be guaranteed for business reasons, and that partition and node failures cannot be excluded, the application architecture has to make some compromises around the level of provided consistency. Note this does not mean that relational databases cannot be used per se; it means the strong consistency guarantee of a single partition node cannot be made across all nodes and that the application architecture cannot use "traditional" database technologies such as distributed querying, full ACID transactions, and synchronous processing of requests without running into availability and scalability issues.

For example, traditional distributed query engines such as Microsoft SQL Server's linked servers assume close coupling of the data sources and are not able to adjust to quickly changing topologies—whether because of nodes being added or because of node failures. They operate synchronously and will wait for nodes to reply or fail the query in case of a node failure, thus impacting availability of the service.

What are some of the ways to build scalable applications using relational database systems as their underlying data stores? Basically the application architectures follow the same service-oriented, functional- and data-partitioning schemes outlined previously. Each leaf partition will be using a relational database, providing local consistency and query processing. To guarantee node availability, each node will be mirrored and made highly available. Depending on the service-level guarantee around failover and read versus update frequency, each mirror will be managed either synchronously or asynchronously.

Global consistency across the many locally consistent nodes will be provided to the level that the application requires, most often relaxing the atomicity, strong consistency, and/or isolation of the global operation. Many techniques exist, such as open nested transaction systems (Saga, multilevel concurrency control) and optimistic concurrency control approaches, as well as specific partitioning and application logic, to reduce the risk of inconsistencies. For example, open nested multilevel transactions relax transactional isolation by allowing certain local changes to become globally visible before the global transaction commits. This increases transactional throughput at the risk of potentially costly compensation work when a global transaction and its impact have to be undone. Thus, the openness is often restricted to specific operations that are commutative and have a clearly defined compensating action. In practice, such advanced transaction models have not yet been widely used, even though some transaction managers provide them.

More frequently, the application partitions data in a first step to avoid local conflicts and then uses optimistic approaches that assume that conflicts rarely occur. This approach takes into account the idea that most people are in fact fine with eventually consistent states of the global data.
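To make the optimistic approach concrete, here is a minimal sketch, assuming a hypothetical Profiles table with a SQL Server rowversion column named RowVer: the update succeeds only if the row has not changed since it was read, and a zero row count tells the caller to re-read and retry (or surface a conflict).

    using System.Data;
    using System.Data.SqlClient;

    // Minimal sketch of optimistic concurrency control against a hypothetical Profiles table.
    static class OptimisticUpdate
    {
        public static bool TryUpdateDisplayName(SqlConnection conn, int userId,
                                                string newName, byte[] expectedRowVer)
        {
            const string sql = @"
                UPDATE Profiles
                SET    DisplayName = @name
                WHERE  UserId = @id AND RowVer = @rowver;";

            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@name", newName);
                cmd.Parameters.AddWithValue("@id", userId);
                cmd.Parameters.Add("@rowver", SqlDbType.Timestamp, 8).Value = expectedRowVer;
                return cmd.ExecuteNonQuery() == 1;   // 0 rows => someone else changed the row
            }
        }
    }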

Accepting short-term "incorrect" global states and results is actually pretty common in our day-to-day lives. Even bank transactions are often "eventually consistent." For example, redeeming a check or settling an investment transaction will not be fully consistent at a global level at the time the transaction is executed. The money will potentially go into the seller's account before it gets deducted from the buyer's account, but there is a guarantee that the money will eventually be deducted and the global state will become consistent.

Using eventual consistency is a more complex application design paradigm than assuming a globally consistent state at all times. The programmer has to determine the acceptable level of inconsistencies—how long the data can be kept in an inconsistent state. The platform provider has to design the system in a way that programmers can easily understand the possible inconsistencies and provide mechanisms to handle them when they appear. Often the agility and scalability gains are worth the additional complexity of the application architecture.


Using eventual consistency as an acceptable global consistency guarantee also allows the application to provide availability during network failures and thus achieve higher scalability. On the one hand, updating a node that has become unavailable will no longer block or fail the global transaction, as long as the system can guarantee that it will eventually be updated. On the other hand, eventual consistency allows the application to operate on older data and still provide useful results; sometimes it even allows partial results if a node cannot be queried (although this is a decision the application has to make). It also means that the architecture can be built using asynchronous services that will provide for higher scalability because the functional services and individual data partitions can do their work without blocking the application.

An Example of How to Scale with SQL

As we already mentioned, several applications with high scalability requirements are being built on top of traditional relational database systems. For example, Twitter uses the NoSQL database Cassandra for some of its needs, but its core database system that manages tweets is still using the MySQL relational database system.

The following example presents a high-level overview of how MySpace achieves scalability of its architecture using Microsoft SQL Server. MySpace is still one of the largest social networking sites. In 2009 it used 440 SQL Server instances to manage 130 million users and one petabyte of data with 4.4 million concurrently active users at peak time.

As outlined earlier, MySpace has chosen to use both functional and data partitioning. Data partitions are geographically distributed to be closer to the users in an area and are further partitioned by user IDs for scale. This makes sense since most users will want to access their own data most frequently. Obviously, since MySpace is a social networking site where individual users connect and leave messages and comments, operations not only target a single partition, but also need to update data across partitions. Given the large demands on availability and scalability, MySpace needs to achieve a balance between scale and correctness.

The basic approach is to perform most of the work in an asynchronous fashion. The asynchronous processing of the change events and interactions with the application provides high availability, and by having the partitions operate on the queued requests in a uniform fashion, the system is able to scale out easily. Using a reliable message infrastructure provides the guarantee that the changes eventually become visible, thus delivering eventual correctness.

Figure 3 provides a high-level abstraction of MySpace's service dispatcher architecture. It consists of a few dozen request routers that dispatch incoming requests to perform a certain user or system action—for example, posting a comment on a friend's picture, submitting a blog entry, or a system request such as deploying a new schema object. During steady state, the request routers are exact copies of each other, including a routing table mapping services to partitions.

The requests are asynchronously distributed across the routers and are dispatched to the individual account partitions (around 440 in the case of MySpace) and the requested service endpoint. Note that the account partitions all provide the same services and schemata at steady state, thus guaranteeing that every service can be provided by every node without being dependent on any other node.

Each of the routers and each of the partitions and services are implemented using SQL Server and SQL Server Service Broker. Service Broker is the key ingredient that enables this architecture to work reliably and efficiently. It provides the asynchronous messaging capabilities that allow the requests to flow at a high rate between the services. Each service exposes a queue to accept requests and the ability to dispatch workers on each item in the queue. Service Broker, like other service-bus and asynchronous messaging components, allows scaling out by simply adding multiple instances of the same service across different partitions. Requests are load balanced across these service instances without having to change the application logic. An interesting difference to some of the other message buses such as MQSeries, RabbitMQ, NServiceBus, and Microsoft Message Queuing (MSMQ) is that Service Broker is deeply built into the database engine.

Besides providing a scalable architecture, Service Broker provides a communication fabric guaranteeing messages to a service are delivered reliably, in order, and exactly once. This guarantees that even in case of a network partition or a node failure, a message is not lost but will eventually be delivered once the node has been reconnected. Since every service will be performed by the database server, local consistency is provided at the level specified for the specific transaction. The use of Service Broker to build and scale the services will provide global eventual consistency.
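As a hedged sketch of how an application component might hand a request to a Service Broker queue from C#, the following uses invented service, contract, and message type names (the //Example/... identifiers); the real object names used by MySpace are not public. Because the SEND executes inside the database, the enqueued message is durable and participates in the local transaction.

    using System.Data;
    using System.Data.SqlClient;

    // Sketch: enqueue a request on a Service Broker queue. All //Example/...
    // names are placeholders for illustration only.
    static class RequestQueue
    {
        public static void EnqueueRequest(string connectionString, string requestXml)
        {
            const string sql = @"
                DECLARE @handle UNIQUEIDENTIFIER;
                BEGIN DIALOG CONVERSATION @handle
                    FROM SERVICE [//Example/InitiatorService]
                    TO SERVICE   '//Example/AccountPartitionService'
                    ON CONTRACT  [//Example/RequestContract]
                    WITH ENCRYPTION = OFF;
                SEND ON CONVERSATION @handle
                    MESSAGE TYPE [//Example/Request] (@body);";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.Add("@body", SqlDbType.NVarChar, -1).Value = requestXml;
                conn.Open();
                cmd.ExecuteNonQuery();   // message is persisted and delivered reliably, in order, exactly once
            }
        }
    }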

The availability of each partition can be improved by providing a failover copy using database mirroring. If a failover occurs, the Service Broker connection also automatically and transparently fails over.

The application scale-out architecture as described avoids a single point of failure and contention by replicating all the routing information across all the request routers and providing the services on all partitions. The asynchronous processing using Service Broker provides scalability, as well as eventual consistency. The architecture, services, and partitioning, however, will evolve over time. Therefore, the changes to the routing information when data gets repartitioned and the updates to services and schemas also need to be maintained in a scalable way. It would not be good if a global lock had to be taken across all the request routers whenever a new partition is added to the routing table.

To address this, the current architecture uses the same Service Broker-based approach to roll out changes to the services and schemas, and a repartitioning of the account services is propagated asynchronously. If a router dispatches a request before its own routing table has been updated, the target partition fails the request because the partition assumption is invalid and sends updated routing information back to the router, which then retries the request based on the new routing information.
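A hypothetical sketch of that retry-on-stale-routing pattern is shown below; the routing table, the partition call, and the staleness signal are all simplified stand-ins for whatever the real request routers use.

    using System;
    using System.Collections.Generic;

    // Sketch: a router retries a request once after refreshing its routing
    // table from the partition's "stale routing" reply. All names are illustrative.
    class Router
    {
        private Dictionary<int, int> accountToPartition;   // accountId -> partition number
        private readonly Func<int, string, (bool Stale, Dictionary<int, int> NewMap)> callPartition;

        public Router(Dictionary<int, int> initialMap,
                      Func<int, string, (bool Stale, Dictionary<int, int> NewMap)> callPartition)
        {
            accountToPartition = initialMap;
            this.callPartition = callPartition;
        }

        public void Dispatch(int accountId, string request)
        {
            for (int attempt = 0; attempt < 2; attempt++)
            {
                int partition = accountToPartition[accountId];
                var (stale, newMap) = callPartition(partition, request);
                if (!stale) return;              // partition accepted the request
                accountToPartition = newMap;     // partition sent back fresh routing info; retry
            }
            throw new InvalidOperationException("Routing information still stale after retry.");
        }
    }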

A similar architecture is also being used for several e-commerce Web sites that build on relational databases. For example, Bank Itau provides a scalable branch banking system and Choice Hotels International has a highly scalable online hotel reservation system using asynchronous messaging.

Summary and Outlook

Building scalable database applications is not necessarily a question of whether one should use a relational database system or a NoSQL system. It is more a question of choosing the right application architecture that is agile enough to scale. For example, combining asynchronous messaging with a relational database system provides the powerful infrastructure that enables developers to build highly scalable, agile applications that provide partition tolerance and availability while providing a high level of eventual consistency.

Scale-out applications with SQL are being built using similar architectural principles as scale-out applications using NoSQL while providing more mature infrastructure for declarative query processing, optimizations, indexing, and data storage/high availability. In addition, scaling out an existing SQL application without having to replace the data tier with a different database system that has different configuration, management, and troubleshooting requirements is very appealing.

Other aspects such as data models, agility requirements, query optimization, data-processing logic, existing infrastructures, and individual capabilities, strengths, and weaknesses will have to be considered as well when deciding between a SQL and a NoSQL database system. Discussing these aspects is unfortunately outside the scope of this article.


All database systems, however, whether relational or NoSQL, still need to provide additional services that make it easier for the developer to build massively scalable applications. For example, relational database systems should add integrated support for data-partitioning scale-out such as sharding. NoSQL databases are working on providing more of the traditional database capabilities, such as secondary indices and declarative query languages.

Until the database systems provide simple-to-use scale-out services, developers will have to design their applications with scale-out in mind and use more generic application patterns such as asynchronous messaging, functional and data partitioning, and fault tolerance to build fault-resilient systems that provide high availability and scalability.


The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine

Today I got the error "The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine" when running my ASP.NET web-based application on my new laptop with Windows 7 64-bit. The page has a form that allows the user to upload an Excel file, and I then read the data from the Excel file into a DataTable object. It used to work on my old laptop, but not on the new one :( .
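For context, here is roughly the kind of code involved, as a minimal sketch (the sheet name and file path handling are simplified); the OleDbDataAdapter.Fill call is what throws the "provider is not registered" error when the site runs as a 64-bit process:

    using System.Data;
    using System.Data.OleDb;

    // Sketch: read an uploaded .xls file into a DataTable via the 32-bit Jet provider.
    static class ExcelReader
    {
        public static DataTable ReadSheet(string filePath)
        {
            string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                             "Data Source=" + filePath + ";" +
                             "Extended Properties=\"Excel 8.0;HDR=Yes\"";

            using (var conn = new OleDbConnection(connStr))
            using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
            {
                var table = new DataTable();
                adapter.Fill(table);   // fails with the error above in a 64-bit worker process
                return table;
            }
        }
    }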

I googled around for this issue and found a solution for it, so I took a note here for people who might face the same problem as me.

Steps to fix this problem

  • Open the IIS Manager
  • Click on Application Pools
  • Select your web application pool
  • Click the "Set Application Pool Defaults" link on the right-hand side
  • In the General section, set "Enable 32-Bit Applications" to True (or script the change as shown below)
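If you prefer to script the change instead of clicking through IIS Manager, here is a small sketch using the Microsoft.Web.Administration API; it assumes your site runs in the "DefaultAppPool" application pool (replace with your own pool name) and that the code runs elevated on the IIS machine.

    using Microsoft.Web.Administration;   // reference Microsoft.Web.Administration.dll

    // Sketch: enable 32-bit worker processes for an application pool programmatically.
    class Enable32Bit
    {
        static void Main()
        {
            using (var serverManager = new ServerManager())
            {
                var pool = serverManager.ApplicationPools["DefaultAppPool"];   // assumed pool name
                pool.Enable32BitAppOnWin64 = true;   // lets the 32-bit Jet provider load
                serverManager.CommitChanges();
            }
        }
    }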


Optimize caching – Page Speed

Optimize caching

Most web pages include resources that change infrequently, such as CSS files, image files, JavaScript files, and so on. These resources take time to download over the network, which increases the time it takes to load a web page. HTTP caching allows these resources to be saved, or cached, by a browser or proxy. Once a resource is cached, a browser or proxy can refer to the locally cached copy instead of having to download it again on subsequent visits to the web page. Thus caching is a double win: you reduce round-trip time by eliminating numerous HTTP requests for the required resources, and you substantially reduce the total payload size of the responses. Besides leading to a dramatic reduction in page load time for subsequent user visits, enabling caching can also significantly reduce the bandwidth and hosting costs for your site.

  1. Leverage browser caching
  2. Leverage proxy caching

Leverage browser caching

Overview

Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network.

Details

HTTP/S supports local caching of static resources by the browser. Some of the newest browsers (e.g. IE 7, Chrome) use a heuristic to decide how long to cache all resources that don't have explicit caching headers. Other older browsers may require that caching headers be set before they will fetch a resource from the cache; and some may never cache any resources sent over SSL.

To take advantage of the full benefits of caching consistently across all browsers, we recommend that you configure your web server to explicitly set caching headers and apply them to all cacheable static resources, not just a small subset (such as images). Cacheable resources include JS and CSS files, image files, and other binary object files (media files, PDFs, Flash files, etc.). In general, HTML is not static, and shouldn't be considered cacheable.

HTTP/1.1 provides the following caching response headers:

  • Expires and Cache-Control: max-age. These specify the “freshness lifetime” of a resource, that is, the time period during which the browser can use the cached resource without checking to see if a new version is available from the web server. They are "strong caching headers" that apply unconditionally; that is, once they're set and the resource is downloaded, the browser will not issue any GET requests for the resource until the expiry date or maximum age is reached.
  • Last-Modified and ETag. These specify some characteristic about the resource that the browser checks to determine if the files are the same. In the Last-Modified header, this is always a date. In the ETag header, this can be any value that uniquely identifies a resource (file versions or content hashes are typical). Last-Modified is a "weak" caching header in that the browser applies a heuristic to determine whether to fetch the item from cache or not. (The heuristics are different among different browsers.) However, these headers allow the browser to efficiently update its cached resources by issuing conditional GET requests when the user explicitly reloads the page. Conditional GETs don't return the full response unless the resource has changed at the server, and thus have lower latency than full GETs.

It is important to specify one of Expires or Cache-Control: max-age, and one of Last-Modified or ETag, for all cacheable resources. It is redundant to specify both Expires and Cache-Control: max-age, or to specify both Last-Modified and ETag.
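In ASP.NET terms, a minimal sketch of setting one strong header (Expires) and one weak header (Last-Modified) on a static resource served by a handler might look like the following; the handler class, the file path, and the one-year lifetime are illustrative choices, not requirements of the API.

    using System;
    using System.Web;

    // Sketch: an IHttpHandler that serves a stylesheet with explicit caching headers.
    public class CachedResourceHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            string path = context.Server.MapPath("~/static/site.css");   // hypothetical resource

            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetExpires(DateTime.UtcNow.AddYears(1));        // strong header: Expires
            context.Response.Cache.SetLastModified(
                System.IO.File.GetLastWriteTimeUtc(path));                          // weak header: Last-Modified

            context.Response.ContentType = "text/css";
            context.Response.TransmitFile(path);
        }
    }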

Recommendations

Set caching headers aggressively for all static resources.
For all cacheable resources, we recommend the following settings:

  • Set Expires to a minimum of one month, and preferably up to one year, in the future. (We prefer Expires over Cache-Control: max-age because it is more widely supported.) Do not set it to more than one year in the future, as that violates the RFC guidelines. If you know exactly when a resource is going to change, setting a shorter expiration is okay. But if you think it "might change soon" but don't know when, you should set a long expiration and use URL fingerprinting (described below). Setting caching aggressively does not "pollute" browser caches: as far as we know, all browsers clear their caches according to a Least Recently Used algorithm; we are not aware of any browsers that wait until resources expire before purging them.
  • Set the Last-Modified date to the last time the resource was changed. If the Last-Modified date is far enough in the past, chances are the browser won't refetch it.
Use fingerprinting to dynamically enable caching.
For resources that change occasionally, you can have the browser cache the resource until it changes on the server, at which point the server tells the browser that a new version is available. You accomplish this by embedding a fingerprint of the resource in its URL (i.e. the file path). When the resource changes, so does its fingerprint, and in turn, so does its URL. As soon as the URL changes, the browser is forced to re-fetch the resource. Fingerprinting allows you to set expiry dates long into the future even for resources that change more frequently than that. Of course, this technique requires that all of the pages that reference the resource know about the fingerprinted URL, which may or may not be feasible, depending on how your pages are coded.
Set the Vary header correctly for Internet Explorer.
Internet Explorer does not cache any resources that are served with a Vary header containing any fields other than Accept-Encoding and User-Agent. To ensure these resources are cached by IE, make sure to strip out any other fields from the Vary header, or remove the Vary header altogether if possible.
Avoid URLs that cause cache collisions in Firefox.
The Firefox disk cache hash functions can generate collisions for URLs that differ only slightly, namely only on 8-character boundaries. When resources hash to the same key, only one of the resources is persisted to disk cache; the remaining resources with the same key have to be re-fetched across browser restarts. Thus, if you are using fingerprinting or are otherwise programmatically generating file URLs, to maximize cache hit rate, avoid the Firefox hash collision issue by ensuring that your application generates URLs that differ on more than 8-character boundaries.
Use the Cache-Control: public directive to enable HTTPS caching for Firefox.
Some versions of Firefox require the Cache-Control: public header to be set in order for resources sent over SSL to be cached on disk, even if the other caching headers are explicitly set. Although this header is normally used to enable caching by proxy servers (as described below), proxies cannot cache any content sent over HTTPS, so it is always safe to set this header for HTTPS resources.

Example

For the stylesheet used to display the user's calendar after login, Google Calendar embeds a fingerprint in the file name, calendar/static/<fingerprint_key>doozercompiled.css, where the fingerprint key is a 128-bit hexadecimal number. At the time the Page Speed Show Resources panel was captured, the fingerprint was set to 82b6bc440914c01297b99b4bca641a5d.

The fingerprinting mechanism allows the server to set the Expires header to exactly one year ahead of the request date; the Last-Modified header to the date the file was last modified; and the Cache-Control: max-age header to 31536000. To cause the client to re-download the file in case it changes before its expiry date or maximum age, the fingerprint (and therefore the URL) changes whenever the file's content does.
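A hedged sketch of how such a fingerprint might be computed and embedded is shown below; the hash-prefix naming scheme is an illustrative choice, not the exact scheme Google Calendar uses.

    using System;
    using System.IO;
    using System.Security.Cryptography;

    // Sketch: derive a 128-bit content fingerprint and embed it in the file name,
    // so the URL changes whenever the file's content changes.
    static class Fingerprint
    {
        public static string FingerprintedName(string filePath)
        {
            using (var md5 = MD5.Create())
            using (var stream = File.OpenRead(filePath))
            {
                byte[] digest = md5.ComputeHash(stream);
                string key = BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
                return key + Path.GetFileName(filePath);   // e.g. "82b6...a5dcompiled.css"
            }
        }
    }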


Leverage proxy caching

Overview

Enabling public caching in the HTTP headers for static resources allows the browser to download resources from a nearby proxy server rather than from a more distant origin server.

Details

In addition to browser caching, HTTP provides for proxy caching, which enables static resources to be cached on public web proxy servers, most notably those used by ISPs. This means that even first-time users to your site can benefit from caching: once a static resource has been requested by one user through the proxy, that resource is available for all other users whose requests go through that same proxy. Since those locations are likely to be in closer network proximity to your users than your servers, proxy caching can result in a significant reduction in network latency. Also, if enabled, proxy caching effectively gives you free web site hosting, since responses served from proxy caches don't draw on your servers' bandwidth at all.

You use the Cache-Control: public header to indicate that a resource can be cached by public web proxies in addition to the browser that issued the request. With some exceptions (described below), you should configure your web server to set this header to public for cacheable resources.

Recommendations

Don't include a query string in the URL for static resources.
Most proxies, most notably Squid up through version 3.0, do not cache resources with a "?" in their URL even if a Cache-control: public header is present in the response. To enable proxy caching for these resources, remove query strings from references to static resources, and instead encode the parameters into the file names themselves.
Don't enable proxy caching for resources that set cookies.
Setting the header to public effectively shares resources among multiple users, which means that any cookies set for those resources are shared as well. While many proxies won't actually cache any resources with cookie headers set, it's better to avoid the risk altogether. Either set the Cache-Control header to private or serve these resources from a cookieless domain.
Be aware of issues with proxy caching of JS and CSS files.
Some public proxies have bugs that do not detect the presence of the Content-Encoding response header. This can result in compressed versions being delivered to client browsers that cannot properly decompress the files. Since these files should always be gzipped by your server, to ensure that the client can correctly read the files, do either of the following (a short ASP.NET sketch follows the list):

  • Set the Cache-Control header to private. This disables proxy caching altogether for these resources. If your application is multi-homed around the globe and relies less on proxy caches for user locality, this might be an appropriate setting.
  • Set the Vary: Accept-Encoding response header. This instructs the proxies to cache two versions of the resource: one compressed, and one uncompressed. The correct version of the resource is delivered based on the client request header. This is a good choice for applications that are singly homed and depend on public proxies for user locality.
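In ASP.NET, the two options above can be expressed roughly as follows; which branch applies depends on how your application is deployed, as described in the bullets.

    using System.Web;

    // Sketch: choose between disabling proxy caching and varying on Accept-Encoding.
    static class ProxyCachingPolicy
    {
        public static void Apply(HttpResponse response, bool multiHomed)
        {
            if (multiHomed)
            {
                // Option 1: keep the compressed resource out of public proxy caches.
                response.Cache.SetCacheability(HttpCacheability.Private);
            }
            else
            {
                // Option 2: let proxies cache, but keep compressed and uncompressed
                // variants separate by emitting Vary: Accept-Encoding.
                response.Cache.SetCacheability(HttpCacheability.Public);
                response.Cache.VaryByHeaders["Accept-Encoding"] = true;
            }
        }
    }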
Source from: http://code.google.com/speed/page-speed/docs/caching.html

IIS Manager Error: The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020)

Install IIS 7

1. To open the Windows Features dialog box, click Start, and then click Control Panel.

2. In the Control Panel, click Programs.

3. Click Turn Windows features on or off.

4. You may receive the Windows Security warning. Click Allow to continue. The Windows Features dialog box is displayed.

5. Expand Internet Information Services. Additional categories of IIS features are displayed. Select Internet Information Services to choose the default features for installation.

6. Expand the additional categories displayed, and select any additional features you want to install, such as Web Management Tools.

7. If you are installing IIS 7 for evaluation purposes, you may want to select additional features to install. Select the check boxes for all IIS features you want to install, and then click OK to start installation.

8. IIS 7 is now installed with a default configuration on Windows Vista or Windows 7. To confirm that the installation succeeded, type the following URL into your browser, http://localhost.

9. Next, you can use Internet Information Services Manager to manage and configure IIS. To open IIS Manager, click Start, type inetmgr in the Search Programs and Files box, and then press ENTER.

10. Now in IIS, you may get an error message that is something like "The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020)"

The solution is: stop the Skype program and restart IIS. Skype can be configured to use port 80 for incoming connections, which conflicts with IIS. This solved it on my laptop (Windows 7 Home), so I took a note here for everybody who may get the same problem as me.

Hope it can help!
