11 Jul 2012

Scalable SQL – Michael Rys, Communications of the ACM, Vol. 54 No. 6, Pages 48-53

One of the leading motivators for NoSQL innovation is the desire to achieve very high scalability to handle the vagaries of Internet-size workloads. Yet many big social Web sites (such as Facebook, MySpace, and Twitter) and many other Web sites and distributed tier 1 applications that require high scalability (such as e-commerce and banking) reportedly remain SQL-based for their core data stores and services.

The question is, how do they do it?

The main goal of the NoSQL/big data movement is to achieve agility. Among the variety of agility dimensions—such as model agility (ease and speed of changing data models), operational agility (ease and speed of changing operational aspects), and programming agility (ease and speed of application development)—one of the most important is the ability to quickly and seamlessly scale an application to accommodate large amounts of data, users, and connections. Scalable architectures are especially important for large distributed applications such as social networking sites, e-commerce Web sites, and point-of-sale/branch infrastructures for more traditional stores and enterprises where the scalability of the application is directly tied to the scalability and success of the business.

These applications have several scalability requirements:

  • Scalability in terms of user load. The application needs to be able to scale to a large number of users, potentially in the millions.
  • Scalability in terms of data load. The application must be able to scale to a large amount of data, whether produced by a few users or as the aggregate of many users.
  • Computational scalability. Operations on the data should be able to scale for both an increasing number of users and increasing data sizes.
  • Scale agility. In order to scale to increasing or decreasing application load, the architecture and operational environment should provide the ability to add or remove resources quickly, without application changes or impact on the availability of the application.

Scalable Architectures

Several major architectural approaches achieve high levels of scalability. Most of them provide scale-out based on some form of functional and/or data partitioning, distributing the work across many processing nodes.

Functional partitioning often follows the service-oriented paradigm of building the application with several independent services each performing a specific task. This allows the application to scale out by assigning separate resources to these services as needed. Functional scale-out partitioning alone, however, often does not provide enough scalability since the number of tasks is limited and not in direct relationship to the big drivers of scalability requirements: the number of users and size of data. So functional partitioning is often combined with data partitioning.

Data partitioning distributes the application's processing over a set of data partitions. Different forms of data partitioning are deployed based on the topology of the processing nodes and the characteristics of the data. For example, if the user base is geographically dispersed and there is a locality requirement for scalability and performance reasons, such as in worldwide social networking sites, then data often is partitioned according to those geographic boundaries. On the other hand, data may be more randomly partitioned—for example, based on customer IDs—if the scale-out requirements are more constrained by the cost of running data-analysis algorithms over the data. In this case, equal partition sizes are more important.
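
As a rough illustration of the second approach, here is a minimal C# sketch (not from the article; the partition count, connection strings, and helper name are hypothetical) of routing requests to data partitions by hashing a customer ID so that partitions stay roughly equal in size:

using System;

// Minimal sketch: route each request to one of N data partitions by customer ID.
// The partition list and connection strings are hypothetical placeholders.
public static class PartitionRouter
{
    private static readonly string[] PartitionConnectionStrings =
    {
        "Server=partition0;Database=Shop;Integrated Security=true",
        "Server=partition1;Database=Shop;Integrated Security=true",
        "Server=partition2;Database=Shop;Integrated Security=true",
        "Server=partition3;Database=Shop;Integrated Security=true"
    };

    // Hashing spreads customers roughly evenly, which keeps partition sizes
    // comparable for data-analysis workloads that scan whole partitions.
    public static string GetConnectionString(int customerId)
    {
        int partition = (customerId & 0x7FFFFFFF) % PartitionConnectionStrings.Length;
        return PartitionConnectionStrings[partition];
    }
}

A geographic scheme would replace the hash with a lookup from the user's region to the nearest data center, but the routing idea is the same.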

Once an application is built using a distributed model to achieve scale, it will have to deal with a set of requirements above and beyond simple centralized application structures:

  • Because of the distribution of both data and processing, the database that in a centralized application model would provide a consistent view of the data and transactional execution is now distributed among many databases. Thus, the application (or a middle tier) has to provide an additional transactional/consistency layer to provide consistency across the partitions.
  • In addition, changes to the applications have to be rolled out to all the partitions in a way that will not interfere with the consistency guarantees and requirements of the application. For example, if the application issues distributed queries against a set of tables that are partitioned across several nodes, and the application is updating the schema of some of these distributed tables, then either the schema change needs to be backward-compatible so it can be rolled out locally without affecting the ongoing queries, or the schema must be updated globally, thus impacting the application's availability during the rollout phase.
  • Finally, there is an increased probability of partition node failures and network partitioning. Therefore, nodes need to be made redundant and applications have to be resilient to network partitioning.

Furthermore, all three of these requirements have to be fulfilled without negatively impacting the availability of the application's services, the main reason why the application probably was scaled out in the first place.

In 2000, Eric Brewer made the conjecture that it is impossible for a distributed Web service to provide all three guarantees—consistency, availability, partition tolerance—at the same time. This conjecture is now commonly known as the CAP theorem and is one of the main arguments why traditional relational database techniques that provide strong ACID guarantees (atomic transactions, transactional consistency and isolation, and data durability) cannot provide both the partition tolerance and availability required by large-scale distributed applications.

So why are many of the leading social networking sites (Facebook, MySpace, Twitter), e-commerce Web sites (hotel reservation systems and shopping sites), and large banking applications still implemented using traditional database systems such as MySQL (Facebook, Twitter) or SQL Server (MySpace, Choice Hotels International, Bank Itau) instead of using the new NoSQL systems?

How Do You Scale Out with SQL?

The high-level answer is that the application architecture is still weighing the same trade-offs required by the CAP theorem. Given that the availability of the application has to be guaranteed for business reasons, and that partition and node failures cannot be excluded, the application architecture has to make some compromises around the level of provided consistency. Note this does not mean that relational databases cannot be used per se; it means the strong consistency guarantee of a single partition node cannot be made across all nodes and that the application architecture cannot use "traditional" database technologies such as distributed querying, full ACID transactions, and synchronous processing of requests without running into availability and scalability issues.

For example, traditional distributed query engines such as Microsoft SQL Server's linked servers assume close coupling of the data sources and are not able to adjust to quickly changing topologies—whether because of nodes being added or because of node failures. They operate synchronously and will wait for nodes to reply or fail the query in case of a node failure, thus impacting availability of the service.

What are some of the ways to build scalable applications using relational database systems as their underlying data stores? Basically the application architectures follow the same service-oriented, functional- and data-partitioning schemes outlined previously. Each leaf partition will be using a relational database, providing local consistency and query processing. To guarantee node availability, each node will be mirrored and made highly available. Depending on the service-level guarantee around failover and read versus update frequency, each mirror will be managed either synchronously or asynchronously.

Global consistency across the many locally consistent nodes will be provided to the level that the application requires, most often relaxing the atomicity, strong consistency, and/or isolation of the global operation. Many techniques exist, such as open nested transaction systems (Saga, multilevel concurrency control) and optimistic concurrency control approaches, and specific partitioning and application logics to reduce the risk of inconsistencies. For example, open nested multilevel transactions relax transactional isolation by allowing certain local changes to become globally visible before the global transaction commits. This increases transactional throughput at the risk of potentially costly compensation work when a global transaction and its impact have to be undone. Thus, the openness often is restricted to specific operations that are commutative and have a clearly defined compensating action. In practice, such advanced transaction models have not yet been widely used, even though some transaction managers provide them.

More frequently, the application partitions data in a first step to avoid local conflicts and then uses optimistic approaches that assume that conflicts rarely occur. This approach takes into account the idea that most people are in fact fine with eventually consistent states of the global data.
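
As a minimal sketch of that optimistic style (not taken from the article; the table, column names, and method are hypothetical), an update can carry the row version it originally read and simply report a conflict if another writer got there first:

using System.Data;
using System.Data.SqlClient;

// Optimistic concurrency sketch: update only if the row still carries the
// version we read earlier; zero rows affected means someone else changed it.
public static bool TryUpdateComment(SqlConnection connection, int commentId,
                                    string newText, byte[] originalRowVersion)
{
    const string sql =
        @"UPDATE Comments
          SET CommentText = @text
          WHERE CommentId = @id AND RowVersion = @version;";

    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@text", newText);
        command.Parameters.AddWithValue("@id", commentId);
        command.Parameters.Add("@version", SqlDbType.Timestamp).Value = originalRowVersion;

        return command.ExecuteNonQuery() == 1;
    }
}

The caller then decides whether to retry with fresh data or surface the conflict, which is exactly the kind of application-level decision described above.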

Accepting short-term "incorrect" global states and results is actually pretty common in our day-to-day lives. Even bank transactions are often "eventually consistent." For example, redeeming a check or settling an investment transaction will not be fully consistent at a global level at the time the transaction is executed. The money will potentially go into the seller's account before it gets deducted from the buyer's account, but there is a guarantee that the money will eventually be deducted and the global state will become consistent.

Using eventual consistency is a more complex application design paradigm than assuming a globally consistent state at all times. The programmer has to determine the acceptable level of inconsistencies—how long the data can be kept in an inconsistent state. The platform provider has to design the system in a way that programmers can easily understand the possible inconsistencies and provide mechanisms to handle them when they appear. Often the agility and scalability gains are worth the additional complexity of the application architecture.


Using eventual consistency as an acceptable global consistency guarantee also allows the application to provide availability during network failures and thus achieve higher scalability. On the one hand, updating a node that has become unavailable will no longer block or fail the global transaction, as long as the system can guarantee that it will eventually be updated. On the other hand, eventual consistency allows the application to operate on older data and still provide useful results; sometimes it even allows partial results if a node cannot be queried (although this is a decision the application has to make). It also means that the architecture can be built using asynchronous services that will provide for higher scalability because the functional services and individual data partitions can do their work without blocking the application.
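
For example, here is a hedged sketch (the partition list and the query delegate are hypothetical) of an application choosing to accept partial results when a partition cannot be reached:

using System;
using System.Collections.Generic;

// Sketch: fan a read-only query out to all partitions and tolerate unreachable
// nodes, returning whatever subset of results is currently available.
public static IList<string> QueryAllPartitions(
    IEnumerable<string> partitionConnectionStrings,
    Func<string, IEnumerable<string>> queryPartition)
{
    var results = new List<string>();
    foreach (string connectionString in partitionConnectionStrings)
    {
        try
        {
            results.AddRange(queryPartition(connectionString));
        }
        catch (Exception)
        {
            // Node unreachable: accept a partial (and possibly stale) result set
            // rather than failing the whole request; this is an application decision.
        }
    }
    return results;
}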

An Example of How to Scale with SQL

As we already mentioned, several applications with high scalability requirements are being built on top of traditional relational database systems. For example, Twitter uses the NoSQL database Cassandra for some of its needs, but its core database system that manages tweets is still using the MySQL relational database system.

The following example presents a high-level overview of how MySpace achieves scalability of its architecture using Microsoft SQL Server. MySpace is still one of the largest social networking sites. In 2009 it used 440 SQL Server instances to manage 130 million users and one petabyte of data with 4.4 million concurrently active users at peak time.

As outlined earlier, MySpace has chosen to use both functional and data partitioning. Data partitions are geographically distributed to be closer to the users in an area and are further partitioned by user ID for scale. This makes sense since most users will want to access their own data most frequently. Obviously, since MySpace is a social networking site where individual users connect and leave messages and comments, operations not only target a single partition, but also need to update data across partitions. Given the large demands on availability and scalability, MySpace needs to achieve a balance between scale and correctness.

The basic approach is to perform most of the work in an asynchronous fashion. The asynchronous processing of the change events and interactions with the application provides high availability, and by having the partitions operate on the queued requests in a uniform fashion, the system is able to scale out easily. Using a reliable message infrastructure provides the guarantee that the changes eventually become visible, thus delivering eventual correctness.

Figure 3 provides a high-level abstraction of MySpace's service dispatcher architecture. It consists of a few dozen request routers that dispatch incoming requests to perform a certain user or system action—for example, posting a comment on a friend's picture, submitting a blog entry, or a system request such as deploying a new schema object. During steady state, the request routers are exact copies of each other, including a routing table mapping services to partitions.

The requests are asynchronously distributed across the routers and get dispatched to the individual account partitions (around 440 in the case of MySpace) and the requested service endpoint. Note that the account partitions provide all the same services and schemata at steady state, thus guaranteeing that every service can be provided by every node without being dependent on any other node.

Each of the routers and each of the partitions and services are implemented using SQL Server and SQL Server Service Broker. Service Broker is the key ingredient that enables this architecture to work reliably and efficiently. It provides the asynchronous messaging capabilities that allow the requests to flow at a high rate between the services. Each service exposes a queue to accept requests and the ability to dispatch workers on each item in the queue. Service Broker, like other service-bus and asynchronous messaging components, allows scaling out by simply adding multiple instances of the same service across different partitions. Requests are load balanced across these service instances without having to change the application logic. An interesting difference to some of the other message buses such as MQSeries, RabbitMQ, NServiceBus, and Microsoft Message Queuing (MSMQ) is that Service Broker is deeply built into the database engine.

Besides providing a scalable architecture, Service Broker provides a communication fabric guaranteeing messages to a service are delivered reliably, in order, and exactly once. This guarantees that even in case of a network partition or a node failure, a message is not lost but will eventually be delivered once the node has been reconnected. Since every service will be performed by the database server, local consistency is provided at the level specified for the specific transaction. The use of Service Broker to build and scale the services will provide global eventual consistency.

The availability of each partition can be improved by providing a failover copy using database mirroring. If a failover occurs, the Service Broker connection also automatically and transparently fails over.

The application scale-out architecture as described avoids a single point of failure and contention by replicating all the routing information across all the request routers and providing the services on all partitions. The asynchronous processing using Service Broker provides scalability, as well as eventual consistency. The architecture, services, and partitioning, however, will evolve over time. Therefore, the changes to the routing information when data gets repartitioned and the updates to services and schemas also need to be maintained in a scalable way. It would not be acceptable, for example, to take a global lock across all the request routers while adding a new partition to the routing table.

To address this, the current architecture uses the same Service Broker-based approach to roll out changes to the services and schemas. A repartitioning of the account services is propagated asynchronously. If a router dispatches a request before its routing table has been updated, the target partition will fail the request because the partition assumption is invalid and will return updated routing information to the router, which then retries the request based on the new routing information.
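
A rough C# sketch of that retry handshake (the types and the partition client are hypothetical, and the real system exchanges Service Broker messages rather than making direct calls):

using System.Collections.Generic;

// Router-side retry sketch: if a partition rejects a request because the
// router's routing table is stale, apply the correction and retry once.
public sealed class RequestRouter
{
    private readonly IDictionary<int, string> routingTable;   // partition key -> partition name
    private readonly IPartitionClient partitions;             // hypothetical messaging client

    public RequestRouter(IDictionary<int, string> routingTable, IPartitionClient partitions)
    {
        this.routingTable = routingTable;
        this.partitions = partitions;
    }

    public void Dispatch(int userId, string request)
    {
        int key = userId % 1024;                               // hypothetical range size
        PartitionReply reply = partitions.Send(routingTable[key], userId, request);

        if (reply.StaleRouting)
        {
            // The partition no longer owns this user: learn the new owner and retry.
            routingTable[key] = reply.CurrentPartition;
            partitions.Send(reply.CurrentPartition, userId, request);
        }
    }
}

public interface IPartitionClient
{
    PartitionReply Send(string partition, int userId, string request);
}

public sealed class PartitionReply
{
    public bool StaleRouting;
    public string CurrentPartition;
}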

A similar architecture is also being used for several e-commerce Web sites that build on relational databases. For example, Bank Itau provides a scalable branch banking system and Choice Hotels International has a highly scalable online hotel reservation system using asynchronous messaging.

Summary and Outlook

Building scalable database applications is not necessarily a question of whether one should use a relational database system or a NoSQL system. It is more a question of choosing the right application architecture that is agile enough to scale. For example, combining asynchronous messaging with a relational database system provides the powerful infrastructure that enables developers to build highly scalable, agile applications that provide partition tolerance and availability while providing a high level of eventual consistency.

Scale-out applications with SQL are being built using similar architectural principles as scale-out applications using NoSQL while providing more mature infrastructure for declarative query processing, optimizations, indexing, and data storage/high availability. In addition, scaling out an existing SQL application without having to replace the data tier with a different database system that has different configuration, management, and troubleshooting requirements is very appealing.

Other aspects such as data models, agility requirements, query optimization, data-processing logic, existing infrastructures, and individual capabilities, strengths, and weaknesses will have to be considered as well when deciding between a SQL and a NoSQL database system. Discussing these aspects is unfortunately outside the scope of this article.


All database systems, however, whether relational or NoSQL, still need to provide additional services that make it easier for the developer to build massively scalable applications. For example, relational database systems should add integrated support for data-partitioning scale-out such as sharding. NoSQL databases are working on providing more of the traditional database capabilities, such as secondary indexes and declarative query languages.

Until the database systems provide simple-to-use scale-out services, developers will have to design their applications with scale-out in mind and use more generic application patterns such as asynchronous messaging, functional and data partitioning, and fault tolerance to build fault-resilient systems that provide high availability and scalability.

14 May 2012

Why It Is Necessary To Get Your Website Scanned For Security

Consider this: in one of the largest Web-based data theft cases, 45 million credit cards were affected. In another case, over 40,000 USD mysteriously disappeared from two bank accounts, thanks to a hacker group. In yet another case, when clients of ABC Company and other prospective customers tried to log in to the company's website, they were redirected to a competitor's page, and the company lost the customers! Apart from direct losses, there are many other indirect losses. According to a WhiteHat Security report in 2008, 9 out of 10 websites are vulnerable to or already infected by one or more security threats.
It has become increasingly important to make your website secure, and the first step towards making it secure is to get a thorough website security scan.

Below are the characteristics of a good website scan tool:
• Should be designed to understand the business/website model.
• Should be able to evaluate website security risks.
• Should be updated regularly.
• Should have easy to use website scan dashboard.
• Should provide detailed technical information.
• Should provide easy to understand reports.
• Should provide detailed risk analysis.
• Should provide feasible technical solutions.

There are many benefits that you can reap using these security tools:
• Keep data secure from any infections.
• Increase website availability.
• Gain visitor trust by providing a safe and secure environment.
• Proactive approach against security threats.
• A security assurance seal from a reputed website security provider helps you enhance brand reputation.
• Websites that are more secure gain higher rankings in search engines.

30 Mar 2011

The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine

Today I got the error "The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine" when running my ASP.NET web application on my new laptop with Windows 7 64-bit. It was a form that allows the user to upload an Excel file, from which I read the data into a DataTable object. It used to work on my old laptop, but not on my new one :( .

I googled around for this issue and found a solution, so I took a note here for people who might face the same problem.

Steps to fix this problem

  • Open the IIS Manager
  • Click on Application Pools
  • Select your web application pool
  • Click on the link "Set Application Pool Defaults" on the right hand side
  • In the General section, set "Enable 32-Bit Applications" to True
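
For reference, the code below is roughly the kind of data access that triggers this error when the application pool runs 64-bit (the file path and sheet name are just examples); once 32-bit applications are enabled, it works again:

using System.Data;
using System.Data.OleDb;

// Read the first worksheet of an .xls file into a DataTable using the Jet provider.
// Microsoft.Jet.OLEDB.4.0 ships only as a 32-bit provider, so a 64-bit worker
// process reports "provider is not registered on the local machine".
public static DataTable ReadExcel(string path)
{
    string connectionString =
        "Provider=Microsoft.Jet.OLEDB.4.0;" +
        "Data Source=" + path + ";" +
        "Extended Properties=\"Excel 8.0;HDR=Yes\"";

    using (var connection = new OleDbConnection(connectionString))
    using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", connection))
    {
        var table = new DataTable();
        adapter.Fill(table);   // the adapter opens and closes the connection itself
        return table;
    }
}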

23 Jan 2011

Web.config/Machine.config Optimal Settings For ASP.NET website

These are tips we have collected from different sources and from our own experience as ASP.NET web developers :) . More updates will come... stay in touch with us...

For production websites, it's important to remember to set <compilation debug="false" /> in Web.config. This ensures no unnecessary debug code is generated for the release version of the website. If you are not using some of the ASP.NET modules, such as Windows Authentication or Passport Authentication, they can be removed from the ASP.NET processing pipeline; otherwise they will be loaded unnecessarily. Below is an example of some modules that could be removed from the pipeline:

<httpModules>
<remove name="WindowsAuthentication"/>
<remove name="PassportAuthentication"/>
<remove name="AnonymousIdentification"/>
<remove name="UrlAuthorization"/>
<remove name="FileAuthorization"/>
<remove name="OutputCache"/>
</httpModules>
ASP.NET Process Model configuration defines process-level properties such as how many threads ASP.NET uses, how long it blocks a thread before timing out, how many requests to keep waiting for I/O work to complete, and so on. With fast servers with a lot of RAM, the process model configuration can be tweaked to make the ASP.NET process consume more system resources and provide better scalability from each server. The settings below can help performance (a cut-down version from an excellent article here):
<processModel
enable="true"
timeout="Infinite"
idleTimeout="Infinite"
shutdownTimeout="00:00:05"
requestLimit="Infinite"
requestQueueLimit="5000"
restartQueueLimit="10"
memoryLimit="60"
responseDeadlockInterval="00:03:00"
responseRestartDeadlockInterval="00:03:00"
maxWorkerThreads="100"
maxIoThreads="100"
minWorkerThreads="40"
minIoThreads="30"
asyncOption="20"
maxAppDomains="2000"
/>

Collected from: Code Project

Minimize Calls to DataBinder.Eval (Collected from MSDN)

The DataBinder.Eval method uses reflection to evaluate the arguments that are passed in and to return the results. If you have a table that has 100 rows and 10 columns, you call DataBinder.Eval 1,000 times if you use DataBinder.Eval on each column. Your choice to use DataBinder.Eval is multiplied 1,000 times in this scenario. Limiting the use of DataBinder.Eval during data binding operations significantly improves page performance. Consider the following ItemTemplate element within a Repeater control using DataBinder.Eval.

<ItemTemplate>
  <tr>
    <td><%# DataBinder.Eval(Container.DataItem,"field1") %></td>
    <td><%# DataBinder.Eval(Container.DataItem,"field2") %></td>
  </tr>
</ItemTemplate>

There are alternatives to using DataBinder.Eval in this scenario. The alternatives include the following:

Use explicit casting. Using explicit casting offers better performance by avoiding the cost of reflection. Cast the Container.DataItem as a DataRowView.

<%# ((DataRowView)Container.DataItem)["field1"] %>

You can gain even better performance with explicit casting if you use a DataReader to bind your control and use the specialized methods to retrieve your data. Cast the Container.DataItem as a DbDataRecord.

<ItemTemplate>
  <tr>
     <td><%# ((DbDataRecord)Container.DataItem).GetString(0) %></td>
     <td><%# ((DbDataRecord)Container.DataItem).GetInt32(1) %></td>
  </tr>
</ItemTemplate>

The explicit casting depends on the type of data source you are binding to; the preceding code illustrates an example.

Use the ItemDataBound event. If the record that is being data bound contains many fields, it may be more efficient to use the ItemDataBound event. By using this event, you only perform the type conversion once. The following sample uses a DataSet object.

protected void Repeater_ItemDataBound(Object sender, RepeaterItemEventArgs e)
{
  DataRowView drv = (DataRowView)e.Item.DataItem;
  Response.Write(string.Format("<td>{0}</td>",drv["field1"]));
  Response.Write(string.Format("<td>{0}</td>",drv["field2"]));
  Response.Write(string.Format("<td>{0}</td>",drv["field3"]));
  Response.Write(string.Format("<td>{0}</td>",drv["field4"]));
}
16 Dec 2010

Ajax: A New Approach to Web Applications

If anything about current interaction design can be called “glamorous,” it’s creating Web applications. After all, when was the last time you heard someone rave about the interaction design of a product that wasn’t on the Web? (Okay, besides the iPod.) All the cool, innovative new projects are online.

Despite this, Web interaction designers can’t help but feel a little envious of our colleagues who create desktop software. Desktop applications have a richness and responsiveness that has seemed out of reach on the Web. The same simplicity that enabled the Web’s rapid proliferation also creates a gap between the experiences we can provide and the experiences users can get from a desktop application.
That gap is closing. Take a look at Google Suggest. Watch the way the suggested terms update as you type, almost instantly. Now look at Google Maps. Zoom in. Use your cursor to grab the map and scroll around a bit. Again, everything happens almost instantly, with no waiting for pages to reload.

Google Suggest and Google Maps are two examples of a new approach to web applications that we at Adaptive Path have been calling Ajax. The name is shorthand for Asynchronous JavaScript + XML, and it represents a fundamental shift in what’s possible on the Web.
Defining Ajax
Ajax isn’t a technology. It’s really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

  • standards-based presentation using XHTML and CSS;
  • dynamic display and interaction using the Document Object Model;
  • data interchange and manipulation using XML and XSLT;
  • and JavaScript binding everything together.

The classic web application model works like this: Most user actions in the interface trigger an HTTP request back to a web server. The server does some processing — retrieving data, crunching numbers, talking to various legacy systems — and then returns an HTML page to the client. It’s a model adapted from the Web’s original use as a hypertext medium, but as fans of The Elements of User Experience know, what makes the Web good for hypertext doesn’t necessarily make it good for software applications.

This approach makes a lot of technical sense, but it doesn’t make for a great user experience. While the server is doing its thing, what’s the user doing? That’s right, waiting. And at every step in a task, the user waits some more.

Obviously, if we were designing the Web from scratch for applications, we wouldn’t make users wait around. Once an interface is loaded, why should the user interaction come to a halt every time the application needs something from the server? In fact, why should the user see the application go to the server at all?
How Ajax is Different
An Ajax application eliminates the start-stop-start-stop nature of interaction on the Web by introducing an intermediary — an Ajax engine — between the user and the server. It seems like adding a layer to the application would make it less responsive, but the opposite is true.

Instead of loading a webpage, at the start of the session, the browser loads an Ajax engine — written in JavaScript and usually tucked away in a hidden frame. This engine is responsible for both rendering the interface the user sees and communicating with the server on the user’s behalf. The Ajax engine allows the user’s interaction with the application to happen asynchronously — independent of communication with the server. So the user is never staring at a blank browser window and an hourglass icon, waiting around for the server to do something.

Every user action that normally would generate an HTTP request takes the form of a JavaScript call to the Ajax engine instead. Any response to a user action that doesn’t require a trip back to the server — such as simple data validation, editing data in memory, and even some navigation — the engine handles on its own. If the engine needs something from the server in order to respond — if it’s submitting data for processing, loading additional interface code, or retrieving new data — the engine makes those requests asynchronously, usually using XML, without stalling a user’s interaction with the application.

Who’s Using Ajax

Google is making a huge investment in developing the Ajax approach. All of the major products Google has introduced over the last year — Orkut, Gmail, the latest beta version of Google Groups, Google Suggest, and Google Maps — are Ajax applications. (For more on the technical nuts and bolts of these Ajax implementations, check out these excellent analyses of Gmail, Google Suggest, and Google Maps.) Others are following suit: many of the features that people love in Flickr depend on Ajax, and Amazon’s A9.com search engine applies similar techniques.

These projects demonstrate that Ajax is not only technically sound, but also practical for real-world applications. This isn’t another technology that only works in a laboratory. And Ajax applications can be any size, from the very simple, single-function Google Suggest to the very complex and sophisticated Google Maps.

At Adaptive Path, we’ve been doing our own work with Ajax over the last several months, and we’re realizing we’ve only scratched the surface of the rich interaction and responsiveness that Ajax applications can provide. Ajax is an important development for Web applications, and its importance is only going to grow. And because there are so many developers out there who already know how to use these technologies, we expect to see many more organizations following Google’s lead in reaping the competitive advantage Ajax provides.

Moving Forward

The biggest challenges in creating Ajax applications are not technical. The core Ajax technologies are mature, stable, and well understood. Instead, the challenges are for the designers of these applications: to forget what we think we know about the limitations of the Web, and begin to imagine a wider, richer range of possibilities.

It’s going to be fun.

Ajax Q&A

March 13, 2005: Since we first published Jesse’s essay, we’ve received an enormous amount of correspondence from readers with questions about Ajax. In this Q&A, Jesse responds to some of the most common queries.

Q. Did Adaptive Path invent Ajax? Did Google? Did Adaptive Path help build Google’s Ajax applications?

A. Neither Adaptive Path nor Google invented Ajax. Google’s recent products are simply the highest-profile examples of Ajax applications. Adaptive Path was not involved in the development of Google’s Ajax applications, but we have been doing Ajax work for some of our other clients.

Q. Is Adaptive Path selling Ajax components or trademarking the name? Where can I download it?

A. Ajax isn’t something you can download. It’s an approach — a way of thinking about the architecture of web applications using certain technologies. Neither the Ajax name nor the approach are proprietary to Adaptive Path.

Q. Is Ajax just another name for XMLHttpRequest?

A. No. XMLHttpRequest is only part of the Ajax equation. XMLHttpRequest is the technical component that makes the asynchronous server communication possible; Ajax is our name for the overall approach described in the article, which relies not only on XMLHttpRequest, but on CSS, DOM, and other technologies.

Q. Why did you feel the need to give this a name?

A. I needed something shorter than “Asynchronous JavaScript+CSS+DOM+XMLHttpRequest” to use when discussing this approach with clients.

Q. Techniques for asynchronous server communication have been around for years. What makes Ajax a “new” approach?

A. What’s new is the prominent use of these techniques in real-world applications to change the fundamental interaction model of the Web. Ajax is taking hold now because these technologies and the industry’s understanding of how to deploy them most effectively have taken time to develop.

Q. Is Ajax a technology platform or is it an architectural style?

A. It’s both. Ajax is a set of technologies being used together in a particular way.

Q. What kinds of applications is Ajax best suited for?

A. We don’t know yet. Because this is a relatively new approach, our understanding of where Ajax can best be applied is still in its infancy. Sometimes the traditional web application model is the most appropriate solution to a problem.

Q. Does this mean Adaptive Path is anti-Flash?

A. Not at all. Macromedia is an Adaptive Path client, and we’ve long been supporters of Flash technology. As Ajax matures, we expect that sometimes Ajax will be the better solution to a particular problem, and sometimes Flash will be the better solution. We’re also interested in exploring ways the technologies can be mixed (as in the case of Flickr, which uses both).

Q. Does Ajax have significant accessibility or browser compatibility limitations? Do Ajax applications break the back button? Is Ajax compatible with REST? Are there security considerations with Ajax development? Can Ajax applications be made to work for users who have JavaScript turned off?

A. The answer to all of these questions is “maybe”. Many developers are already working on ways to address these concerns. We think there’s more work to be done to determine all the limitations of Ajax, and we expect the Ajax development community to uncover more issues like these along the way.

Q. Some of the Google examples you cite don’t use XML at all. Do I have to use XML and/or XSLT in an Ajax application?

A. No. XML is the most fully-developed means of getting data in and out of an Ajax client, but there’s no reason you couldn’t accomplish the same effects using a technology like JavaScript Object Notation or any similar means of structuring data for interchange.

Q. Are Ajax applications easier to develop than traditional web applications?

A. Not necessarily. Ajax applications inevitably involve running complex JavaScript code on the client. Making that complex code efficient and bug-free is not a task to be taken lightly, and better development tools and frameworks will be needed to help us meet that challenge.

Q. Do Ajax applications always deliver a better experience than traditional web applications?

A. Not necessarily. Ajax gives interaction designers more flexibility. However, the more power we have, the more caution we must use in exercising it. We must be careful to use Ajax to enhance the user experience of our applications, not degrade it.


Jesse James Garrett is the Director of User Experience Strategy and a founder of Adaptive Path. He is the author of the widely-referenced book The Elements of User Experience.

Other essays by Jesse James Garrett include The Nine Pillars of Successful Web Teams and Six Design Lessons From the Apple Store.

(Collected from Adaptive Path)

8 Aug 2010

The Benefits of Outsourcing Difficult Web Development Projects

The website development world has tremendously expanded over the past five years, and it doesn't seem to be shrinking. If you've been in the web development field any length of time, you probably realize how much this field has grown and changed.

Simple HTML for many website developers is a thing of the past. Who would have thought that so many programming languages would be invented in such a short period? From simple HTML to PHP, XHTML, CSS, JAVA, FLASH, etc., etc., the list goes on and on!

Outsourcing, although not new to the business world, has quickly become a familiar term in the web development field for this very reason.

Your Niche Web Design Skills

Each website designer has an area of expertise that they're very familiar with and have performed often. If you took website development in college, you probably focused on just a couple of aspects of development. No one college class teaches all the methods. Perhaps you are very savvy with Flash and have created many websites using Flash. Of course, there are plenty of potential web design customers in the Flash department, but what if you could expand to other areas of design that you know little about?

Expanding without Additional Training

Perhaps you've had the terrible experience of turning clients away simply because you didn't have the skills necessary to complete their web project. If you've had to do this, don't be too hard on yourself. Nobody has every skill in website development, but everyone in website development has at least one or a few skills!

You might ask, “How can I expand my business or extend services in areas where I'm not an expert?” As mentioned above, outsourcing has become a familiar term in the web development world for this very purpose. Outsourcing gives you an opportunity to expand your business without going to school for more training.

How Outsourcing Works

Outsourcing simply means that you are hiring a person who's not a direct employee (and often lives out of the country) to complete a web design project for you. Instead of simply referring your client to a new company or individual, you're actually becoming the middle man between your client and the skilled designer. You still make a profit, the person hired gets paid for their work, the client gets a completed project – and you get to keep your client!

Even though you're not skilled in an area of design, you can outsource the work to someone who is skilled in that area.

Outsourcing Your Difficult Projects

Every web developer understands that clients can be easily swayed when working on a project. Your client may hire you to begin designing in simple HTML, which is your area of expertise, but suddenly at midstream, the client has been convinced by an outside influence that PHP is the “way to go” with their website.

If this happens (and it does happen), you have three choices:

1. Tell the client that you're going to have to continue in HTML, or you won't be able to complete the project. You lose both the client and the remainder of your pay.

2. Try to convince the client that PHP is NOT the way to go – good luck!

3. Confidently discuss with your client the time frame, any additional costs, and what types of changes will need to be made to redesign the site in PHP. Once you reach an agreement on a PHP direction, you can outsource the work.

Obviously option number three would be best for you and the client. The client won't have to start over with a new website developer. You're already familiar with the client's website needs, goals and business.

You'll benefit because you get to keep your client and build a good reputation of being a problem solver. You don't have to tell your client all the details of your outsourcing plans. You can, however, explain that you have a helper who is experienced in PHP, so the client will understand that you're not working alone on his/her project.

Outsourcing Your Overflow

Another way to use outsourcing to your advantage is when you are so busy that you must turn clients away. If you're turning clients away, it might be better to keep the clients and outsource the projects instead. You may feel that you don't need the extra business; however, remember that there are always slow times in any business. The more faithful clients you have, the better! Also, you may offer additional services which can build a monthly income, such as web hosting, site updates, etc.

24 Jul 2010

.NET Framework 3.5

.NET Framework (NetFx or Fx) version 3.5 has two elements to it that must be understood: the green bits and the red bits. The original references to this term are on old blog posts by Soma and Jason. Compared to those two blog entries I have the advantage of 13 months of hindsight :-) , so I will provide here the details behind those descriptions in my own words starting with my own slide:

[Slide: .NET Framework 3.5]

When we say red bits, we mean Framework bits that exist in RTM today, i.e., NetFx v2.0 and NetFx v3.0.

NetFx v3.5 includes updates for those two existing frameworks. However, those updates are not a whole bunch of new features or changes, but in reality a service pack with predominantly bug fixes and perf improvements. So to revisit the terminology: Fx 3.5 includes v2.0 SP1 and v3.0 SP1. Like with all service packs, there should be nothing in there that could break your application. Having said that, if a bug is fixed in the SP and your code was taking advantage of that bug, then your code will break of course. To be absolutely clear, this is an in-place upgrade to v2 and v3, not a side-by-side story at the framework/clr level. I will not focus anymore on the Service Pack (red bits) improvements in Fx 3.5. If you are interested in that you may wish to read my previous blog posts here, here, here and here.

When we say green bits, we mean brand new assemblies with brand new types in them. These are simply adding to the .NET Framework (not changing or removing) just like Fx 3.0 was simply adding to v2.0 without changing existing assemblies and without changing the CLR engine. It is here where you find the brand new stuff to talk about. In Beta 1, the list of new assemblies (green bits) is:

1. System.Data.Linq.dll – The implementation for LINQ to SQL.

2. System.Xml.Linq.dll – The implementation for LINQ to XML.

3. System.AddIn.dll, System.AddIn.Contract.dll – New AddIn (plug-in) model.

4. System.Net.dll – Peer to Peer APIs.

5. System.DirectoryServices.AccountManagement.dll – Wrapper for Active Directory APIs.

6. System.Management.Instrumentation.dll – WMI 2.0 managed provider (combined with System.Management namespace in System.Core.dll).

7. System.WorkflowServices.dll and System.ServiceModel.Web.dll – WF and WCF enhancements (for more on WF + WCF in v3.5 follow links from here).

8. System.Web.Extensions.dll – The implementation for ASP.NET AJAX (for more web enhancements, follow links from here) plus also the implementation of Client Application Services and the three ASP.NET 3.5 controls.

9. System.Core.dll – In addition to the LINQ to Objects implementation, this assembly includes the following: HashSet, TimeZoneInfo, Pipes, ReaderWriterLockSlim, System.Security.*, System.Diagnostics.Eventing.* and System.Diagnostics.PerformanceData.
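
As a small illustration (my own example, not from the original post), here is what two of those new types look like in code:

using System;
using System.Collections.Generic;

class GreenBitsDemo
{
    static void Main()
    {
        // HashSet<T> is new in .NET 3.5: a set collection that ignores duplicates.
        var assemblies = new HashSet<string> { "System.Core.dll", "System.Xml.Linq.dll" };
        assemblies.Add("System.Core.dll");          // duplicate, not added again
        Console.WriteLine(assemblies.Count);        // prints 2

        // TimeZoneInfo is also new in .NET 3.5 and handles time-zone conversions.
        DateTime utcNow = TimeZoneInfo.ConvertTimeToUtc(DateTime.Now);
        Console.WriteLine(utcNow);
    }
}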

Collected from Daniel Moth blog

20 Jul 2010

UTF-8 Encoding

In working on a web-based application that needed to support Netscape Communicator 4.x+ and Microsoft Internet Explorer 5.x+, I discovered that the older versions of these browsers had poor support for UTF-8 encoding. I needed to find a way to make form field entries URL-safe and also needed to support multiple languages. The JavaScript escape() function fixes ASCII characters that are not valid for use in URLs, but does not handle Unicode characters well. To make matters worse, there were browser incompatibilities: using escape() in IE would generate a new string that looked like %unnnn, where n is a hexadecimal digit. The correct encoding should follow RFC 2279 and be a set of hexadecimal digit pairs like %nn%nn. Netscape 4 would just treat the characters as ASCII, which would result in lost accents and umlauts.

The encodeURIComponent() function introduced in IE5.5, Netscape 6, and Mozilla does exactly what is needed. However, since the function is unavailable in Netscape 4.x and IE5, a different solution is needed. All JavaScript strings are unicode, so I expected that it would be possible to properly encode them. Thankfully, someone saw my plea for help and sent me some helpful example code.

I have faced this problem myself. When using jQuery Ajax, even with the contentType set to "application/json;charset=utf-8", it still did not work for an accented language (like Vietnamese). After searching, I realized that we have to use encodeURIComponent() to encode Unicode characters before submitting to the server. So I took a note here in the hope that it is useful for anyone who faces the same problem.

Collected from http://www.worldtimzone.com/res/encode
