Monday, January 25, 2010

What is .htaccess and how is it being used?

An .htaccess file is a simple ASCII file, like those created with a text editor such as Notepad or SimpleText. Many people are confused by the naming convention: the file is not named file.htaccess or somepage.htaccess; its full name is simply .htaccess, with nothing before the extension. Its most widely known uses are implementing custom error pages and password-protecting directories.
Creating the File
To create the file, open a text editor and save an empty page as .htaccess. If the editor will not save an empty page, simply type in one character. An editor may append its default file extension to the name; Notepad, for one, will call the file .htaccess.txt, but the .txt (or any other extension) must be removed before you can start "htaccessing". Do this by clicking the file and renaming it so that nothing remains but .htaccess, or rename it via telnet or your FTP program.
These files must be uploaded in ASCII mode, not binary. Users can CHMOD the .htaccess file to 644 to make it usable by the server while preventing it from being read by a browser, since a readable .htaccess file can seriously compromise security. If directories are password protected and a browser can read the .htaccess file, the location of the authentication file can be acquired and used to reverse engineer the user list, granting full access to everything that had previously been protected. This can be prevented either by placing all authentication files above the www root directory, so they are not web-accessible at all, or by a series of commands in the .htaccess file itself that prevents it from being read by a browser.
Most commands in .htaccess are meant to be placed on one line only, so if your text editor uses word wrap, disable it; otherwise it might insert a few stray characters that will confuse Apache. .htaccess is an Apache convention and is not used on NT (IIS) servers. Apache is generally very intolerant of malformed content in an .htaccess file and will typically respond with a server error.
The directory in which an .htaccess file is placed is "affected" by it, as are all of its sub-directories. If a user wishes certain .htaccess commands not to affect a specific directory, place a new .htaccess file within that directory and omit the commands that should not apply there. The .htaccess file nearest to the current directory is the one that takes effect; a global .htaccess located in the root, if it is the nearest one, affects every single directory in the entire site.
Placement of .htaccess files should not be done indiscriminately, as this can result in redundancy and may cause an infinite loop of redirects or errors. Some sites do not allow the use of .htaccess files, because a server hosting many domains can be slowed down when all of them use .htaccess files, and because .htaccess can override server configuration specifically set up by the administrator. It is therefore necessary to make sure that the use of .htaccess is allowed before actually using it.
Error documents are only a part of the general use of .htaccess. Specifying one’s own customized error documents will require a command within the .htaccess file. The pages can be named anything and can be placed anywhere within the site as long as they are web-accessible through a URL. The best names are those that would prevent the user from forgetting what the page is being used for.
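As a sketch, the command for a custom error page is the ErrorDocument directive; the page names below are only illustrative examples:

```apache
# Map common HTTP error codes to custom, web-accessible pages
# (the filenames here are examples; any web-accessible URL works)
ErrorDocument 404 /errors/notfound.html
ErrorDocument 500 /errors/servererror.html
```

Each directive goes on its own line, pairing an error code with the page to serve.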

Password protection is effectively dealt with by .htaccess. By creating a file called .htpasswd, username and the encrypted password of the people to be allowed access are placed in the .htpasswd file. The .htpasswd file should likewise be not uploaded to a directory that is web accessible for maximum security.
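A typical setup pairs .htaccess directives like the following (the realm name and file path are illustrative; keep the .htpasswd path outside the web root):

```apache
# Ask for a username/password and check it against the .htpasswd file
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /home/user/.htpasswd
Require valid-user
```

The .htpasswd file itself contains one username:encrypted-password pair per line.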
Whole directories of a site can be redirected using the .htaccess file without the need to specify each file. Thus any request made for an old site will be redirected to the new site, with the extra information in the URL added on. This is a very powerful feature when used correctly.
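A directory-wide redirect can be sketched with the Redirect directive; the old and new locations below are placeholders:

```apache
# Send every request under /oldsite to the new server,
# preserving the remainder of the URL
Redirect 301 /oldsite http://www.example.com/newsite
```

A request for /oldsite/page.html would thus land at http://www.example.com/newsite/page.html.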
Aside from custom error pages, password-protecting folders and automatic redirection of users, .htaccess can also change the file extension served, ban users with certain IP addresses or allow only users with certain IP addresses, stop directory listing and use a different file as the index file. Accessing a site protected by .htaccess will cause the browser to pop up a standard username/password dialog box. There are, however, scripts available that let you embed a username/password box in a website to do the authentication. The wide variety of uses of .htaccess provides time-saving options and increased security for a website.
Many hosts support .htaccess but do not publicize it while many others have the capability for it but do not allow their users to have an .htaccess file. Generally, a server that runs UNIX or any version of the Apache web server will support .htaccess although the host may not allow its use.
When to Use .htaccess Files
In general, .htaccess files should only be used when there is no access to the main server configuration file. Contrary to common belief, user authentication does not have to be done in .htaccess files; the preferred way is to put user authentication configuration in the main server configuration.
They should be used in situations where the content provider needs to make configuration changes to the server on a per-directory basis but does not have root access on the server system. If the server administrator is unwilling to make frequent configuration changes, individual users can be permitted to make these changes in .htaccess files for themselves. As a general rule, though, the use of .htaccess should be avoided when possible, since the same configuration can be made just as effectively in a Directory section of the main server configuration file.
Two main factors warrant avoiding the use of .htaccess files: performance and security. Permitting .htaccess files causes a performance hit whether or not they are actually used, since Apache must look in every directory for such a file, and the .htaccess file is re-read every time a document is requested. Apache must also search for .htaccess files in all higher-level directories in order to assemble the full set of applicable directives. Thus, if a file is served from a directory four levels deep, each access results in four additional file-system lookups, even if none of those .htaccess files is present.
The use of .htaccess permits users to modify server configuration, which may produce uncontrolled changes. This privilege should be carefully considered before being given to users. The use of .htaccess files can be completely disabled by setting the AllowOverride directive to None.
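In the main server configuration this looks like the following sketch (the directory path is illustrative):

```apache
# httpd.conf: stop Apache from even looking for .htaccess files here
<Directory "/var/www/html">
    AllowOverride None
</Directory>
```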

Using .htaccess files lets you control the behavior of your site or a specific directory on your site. For example, if you place an .htaccess file in your root directory, it will affect your entire site; if you place it in a /content directory, it will only affect that directory.

.htaccess works on our Linux servers.

Using an .htaccess file, you can:

Customize the Error pages for your site.
Protect your site with a password.
Enable server-side includes.
Deny access to your site based on IP.
Change your default directory page (index.html).
Redirect visitors to another page.
Prevent directory listing.
Add MIME types.

Introduction to e-mail

E-mail is the most widely used service on the Internet, so the TCP/IP protocol suite offers a range of protocols allowing easy management of email routing over the network.

The SMTP protocol

The SMTP protocol (Simple Mail Transfer Protocol) is the standard protocol enabling mail to be transferred from one server to another over a point-to-point connection.

This is a protocol operating in online mode, carried over TCP. The mail is sent directly to the recipient's mail server. The SMTP protocol works using text commands sent to the SMTP server (on port 25 by default). Each command sent by the client (terminated by the ASCII character string CR/LF, equivalent to pressing the Enter key) is followed by a response from the SMTP server comprising a number and a descriptive message.

Here is a scenario of a request for sending mail to an SMTP server

When opening the SMTP session, the first command to be sent is the HELO command, followed by a space (written <SP>) and the domain name of your machine (in order to say "hello, I am this machine"), then validated by Enter (written <CRLF>). Since April 2001, the specifications for the SMTP protocol, defined in RFC 2821, replace the HELO command with the EHLO command.
The second command is "MAIL FROM:" followed by the email address of the originator. If the command is accepted, the server sends back the message "250 OK".
The next command is "RCPT TO:" followed by the email address of the recipient, repeated once per recipient. If the command is accepted, the server sends back the message "250 OK".
The DATA command is the third stage of sending the email. It announces the start of the message body. If the command is accepted, the server sends back an intermediary reply numbered 354, indicating that the email body can begin; the server then collects the following lines until the end of the message, marked by a line containing only a dot. The email body may contain headers such as Date, Subject, From and Cc.
Once the terminating dot is received, the server sends back the message "250 OK".
Here is an example of a transaction between a client (C) and an SMTP server (S); the host names and email addresses are illustrative placeholders:
S: 220 smtp.example.com SMTP Ready
C: EHLO client.example.com
S: 250 smtp.example.com
C: MAIL FROM: <jeff@example.com>
S: 250 OK
C: RCPT TO: <meandus@example.net>
S: 250 OK
C: RCPT TO: <arthur@example.net>
S: 550 No such user here
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: Subject: Hello
C:
C: Hello Meandus,
C: How are things?
C: See you soon!
C: .
S: 250 OK
C: QUIT
S: 221 smtp.example.com closing transmission
The basic specifications of the SMTP protocol state that all characters sent are coded in 7-bit ASCII, with the 8th bit explicitly set to zero. To send accented characters, it is therefore necessary to resort to encodings defined by the MIME specifications:

base64 for attached files
quoted-printable (abbreviated to QP) for special characters contained within the message body
It is therefore possible to send an email using a simple telnet to port 25 of the SMTP server:

telnet smtp.example.com 25

(the server name above is only a placeholder; replace smtp.example.com with the domain name of your Internet service provider's mail server)
Here is a summary of the principal SMTP commands (the example host and addresses are placeholders):

Command            Example                            Description
HELO (now EHLO)    EHLO client.example.com            Identification using the domain name or IP address of the originator computer
MAIL FROM:         MAIL FROM: <jeff@example.com>      Identification of the originator's address
RCPT TO:           RCPT TO: <meandus@example.net>     Identification of the recipient's address
DATA               DATA                               Announces the email body, terminated by a line containing only a dot
QUIT               QUIT                               Exit the SMTP server
HELP               HELP                               List of SMTP commands supported by the server
All the specifications for the SMTP protocol are defined in RFC 821 (since April 2001, the SMTP protocol specifications are defined in RFC 2821).
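The dialogue above can be sketched in code. Here is a minimal Python sketch (the host name and addresses are illustrative placeholders, not part of the original text) that builds the command sequence a client would send; for real connections, Python's standard smtplib module implements the full dialogue, including CR/LF termination and reply handling:

```python
def smtp_commands(sender, recipients, body_lines, client_host="client.example.com"):
    """Build the ordered command sequence an SMTP client sends for one
    message. Server replies (250, 354, ...) are not modeled here."""
    cmds = ["EHLO " + client_host]               # identify ourselves (formerly HELO)
    cmds.append("MAIL FROM: <%s>" % sender)      # originator's address
    for rcpt in recipients:                      # one RCPT TO per recipient
        cmds.append("RCPT TO: <%s>" % rcpt)
    cmds.append("DATA")                          # announce the message body
    for line in body_lines:
        # A line containing only "." ends the message, so body lines
        # beginning with "." are escaped with an extra leading dot.
        cmds.append("." + line if line.startswith(".") else line)
    cmds.append(".")                             # end-of-message marker
    cmds.append("QUIT")                          # close the session
    return cmds
```

On the wire, each of these commands would be terminated with CR/LF, as described above.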

The POP3 protocol

The POP protocol (Post Office Protocol), as its name indicates, makes it possible to collect email from a remote server (the POP server). It is necessary for people not permanently connected to the Internet, so that they can consult emails received while they were offline.

There are two main versions of this protocol, POP2 and POP3, to which ports 109 and 110 are allocated respectively and which operate using radically different text commands.

Just like the SMTP protocol, the POP protocol (POP2 and POP3) works using text commands sent to the POP server. Each command sent by the client (terminated by the CR/LF string) comprises a key word, possibly accompanied by one or several arguments, and is followed by a response from the POP server comprising a number and a descriptive message.

Here is a summary of the principal POP2 commands:

Command    Description
HELLO      Identification using the IP address of the originator computer
FOLDER     Name of the inbox to be consulted
READ       Number of the message to be read
RETRIEVE   Number of the message to be picked up
SAVE       Number of the message to be saved
DELETE     Number of the message to be deleted
QUIT       Exit the POP2 server
Here is a summary of the principal POP3 commands:

Command        Description
USER           Authenticates the user. It must be followed by the user name, i.e. a character string identifying the user on the server, and it must precede the PASS command.
PASS           Specifies the user's password, where the user name has been given by a prior USER command.
STAT           Information on the messages contained on the server
RETR           Number of the message to be picked up
DELE           Number of the message to be deleted
LIST [msg]     Number of the message to be displayed
NOOP           Allows the connection to be kept open in the event of inactivity
TOP <msg> <n>  Displays the first n lines of the message whose number is given in the argument. In the event of a positive response, the server sends back the message headers, then a blank line and finally the first n lines of the message.
UIDL [msg]     Asks the server to send back a line containing information about the message possibly given in the argument. The line contains a character string called a unique-id listing, making it possible to identify the message uniquely on the server, independently of the session. The optional argument is the number of a message existing on the POP server, i.e. an undeleted message.
QUIT           Requests exit from the POP3 server. It leads to the deletion of all messages marked as deleted and sends back the status of this action.
The POP3 protocol thus manages authentication using a user name and password; however, it is not secure, because the passwords, like the emails, circulate in plain text (unencrypted) over the network. In fact, according to RFC 1939, it is possible to authenticate using an MD5 digest of the password and thus benefit from secure authentication. However, since this command is optional, few servers implement it. Furthermore, the POP3 protocol locks the inbox during access, which means that simultaneous access to the same inbox by two users is impossible.

In the same way that it is possible to send an email using telnet, it is also possible to access your incoming mail with a simple telnet to the POP server's port (110 by default):

telnet pop.example.com 110

(the server name above is only a placeholder; replace pop.example.com with the domain name of your Internet service provider's POP server)
S: +OK POP3 service
S: (Netscape Messaging Server 4.15 Patch 6 (built Mar 31 2001))
C: USER jeff
S: +OK Name is a valid mailbox
C: PASS password
S: +OK Maildrop ready
C: STAT
S: +OK 2 0
C: TOP 1 5
S: +OK
S: Subject: Hello
S:
S: Hello Meandus,
S: How are things?
S: See you soon!
S: .
What is displayed depends on the Telnet client you are using; depending on your client, you may need to activate the local echo option.
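Server replies like these are single lines and easy to take apart programmatically. As an illustrative Python sketch (not part of the original article), here is a parser for the STAT reply, which carries the message count and the total mailbox size; for real connections, Python's standard poplib module implements the full protocol:

```python
def parse_stat(reply):
    """Parse a POP3 STAT reply such as "+OK 2 320" into the pair
    (message_count, total_size_in_bytes).

    Raises ValueError for -ERR or malformed replies."""
    parts = reply.split()
    if len(parts) < 3 or parts[0] != "+OK":
        raise ValueError("unexpected STAT reply: %r" % reply)
    return int(parts[1]), int(parts[2])
```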
The IMAP protocol

The IMAP protocol (Internet Message Access Protocol) is an alternative to POP3 that offers many more possibilities:

IMAP allows several simultaneous accesses to be managed
IMAP makes it possible to manage several inboxes
IMAP provides more criteria which can be used to sort emails

Geolocation by IP Address

The Internet has become a collection of resources meant to appeal to a large general audience. Although this multitude of information has been a great boon, it also has diluted the importance of geographically localized information. Offering the ability for Internet users to garner information based on geographic location can decrease search times and increase visibility of local establishments. Similarly, user communities and chat-rooms can be enhanced through knowing the locations (and therefore, local times, weather conditions and news events) of their members as they roam the globe. It is possible to provide user services in applications and Web sites without the need for users to carry GPS receivers or even to know where they themselves are.

Geolocation by IP address is the technique of determining a user's geographic latitude, longitude and, by inference, city, region and nation by comparing the user's public Internet IP address with known locations of other electronically neighboring servers and routers. This article presents some of the reasons for and benefits of using geolocation through IP address, as well as several techniques for applying this technology to an application, Web site or user community.

Why Geolocation?
The benefits of geolocation may sound complex, but a simple example may help illustrate the possibilities. Consider a traveling businessman currently on the road to San Francisco. After checking into his hotel, he pulls out his laptop and hops onto the wireless Internet access point provided by the hotel. He opens his chat program as well as a Web browser. His friends and family see from his chat profile that he currently is near Golden Gate Park. Consequently, they can determine his local time. By pulling up a Web browser, furthermore, the businessman can do a localized search to find nearby restaurants and theaters.

Without having to know the address of the hotel he's staying in, the chat program and Web pages can determine his location based on the Internet address through which he is connecting. The following week, when he has returned to his home in Florida, he uses his laptop to log into a chat program, and his chat profile correctly places him in his home city. There is no need to change computer configurations, remember addresses or even be aware, as the user, that you are benefitting from geolocation services.

Possible applications for geolocation by IP address exist for Weblogs, chat programs, user communities, forums, distributed computing environments, security, urban mapping and network robustness. We encourage you to find out what applications and Web sites currently employ geolocation or could be enhanced by adding support.

Although several methods of geographically locating an individual currently exist, each has costs and other detriments that make it prohibitive in everyday computing environments. GPS is limited by line-of-sight to the constellation of satellites in Earth's orbit, which severely limits locating systems in cities (due to tall buildings) and indoors (due to complete overhead blockage). Several projects have been started to install sensors or to use broadcast television signals (see Resources) to provide urban and indoor geolocation. Unfortunately, these solutions require a great deal of money to cover the installation of new infrastructure and devices, and these services are not yet widely supported.

By contrast, these environments already are witnessing a growing trend of installing wireless access points (AP). Airports, cafes, offices and city neighborhoods all have begun installing wireless APs to provide Internet access to wireless devices. Using this available and symbiotic infrastructure, geolocation by IP address can be implemented immediately.

Geolocation Standards and Services
As discussed below, several RFC proposals have been made by the Internet Engineering Task Force (IETF) that aim to provide geolocation resources and infrastructure. However, these standards have met with little support from users and administrators. To date, there has not been much interest in providing user location tracking and automatic localization services. Several companies now offer pay-per-use services for determining location by IP. These services can be expensive, however, and don't necessarily offer the kind of functionality a programmer may want when designing his or her Web site or application.

Several years ago, CAIDA, the Cooperative Association for Internet Data Analysis, began a geolocation-by-IP-address effort called NetGeo. This system was a publicly accessible database of geographically located IP addresses. Through the use of many complex rules, the NetGeo database slowly was filled and corrected for the locations of IP addresses. The project has since been stopped, and the technology licensed to new partners; the database, although several years old, still is available and provides a good resource for determining rough locations.

To query the NetGeo database, an HTTP request is made with the IP address in question; the response contains lines such as these:

LAT: 33.98
LONG: -118.45
LAST_UPDATED: 16-May-2001
LOOKUP_TYPE: Block Allocation

As you can see, the NetGeo response includes the city, state, country, latitude and longitude of the IP address in question. Furthermore, the granularity (LAT_LONG_GRAN) also is estimated to give some idea about the accuracy of the location. This accuracy also can be deduced from the LAST_UPDATED field. Obviously, the older the update, the more likely it is that the location has changed. This is true especially for IP addresses assigned to residential customers, as companies holding these addresses are in constant flux.

In order to make this database useful to an application or Web site, we need to be able to make the request through some programming interface. Several existing packages assist in retrieving information from the NetGeo database: the PEAR system has a PHP package (see Resources), and a Perl module, CAIDA::NetGeo::Client, is available. However, it is a relatively straightforward task to make the request in whatever language your application or service uses. For example, a function in PHP for fetching and parsing the NetGeo response looks like this:

function getLocationCaidaNetGeo($ip)
{
    // NetGeo base URL (omitted in the original text)
    $NetGeoURL = "" . $ip;
    $NetGeoHTML = "";
    if ($NetGeoFP = fopen($NetGeoURL, "r")) {
        ob_start();
        fpassthru($NetGeoFP);
        $NetGeoHTML = ob_get_contents();
        ob_end_clean();
        fclose($NetGeoFP);
    }
    $location = array();
    preg_match("/LAT:(.*)/i", $NetGeoHTML, $temp)
        or die("Could not find element LAT");
    $location[0] = $temp[1];
    preg_match("/LONG:(.*)/i", $NetGeoHTML, $temp)
        or die("Could not find element LONG");
    $location[1] = $temp[1];
    return $location;
}
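The same extraction can be written in other languages. Here is an illustrative Python sketch of the LAT/LONG parsing (the HTTP fetch is omitted, since the NetGeo service URL is not given in the text):

```python
import re

def parse_netgeo(response_text):
    """Extract (latitude, longitude) from the text of a NetGeo reply,
    i.e. the key/value lines shown earlier (LAT: ..., LONG: ...)."""
    lat = re.search(r"LAT:(.*)", response_text, re.IGNORECASE)
    lng = re.search(r"LONG:(.*)", response_text, re.IGNORECASE)
    if lat is None or lng is None:
        raise ValueError("LAT/LONG not found in NetGeo response")
    return float(lat.group(1)), float(lng.group(1))
```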
Using DNS to Your Advantage
As previously mentioned, the NetGeo database slowly is becoming more inaccurate as IP address blocks change hands in company close-outs and absorptions. Several other tools are available for determining location, however. A description of the NetGeo infrastructure itself (see Resources) presents some of the methods it employed for mapping IP addresses and can be a source of guidance for future projects.

One of the most useful geolocation resources is DNS LOC information, but it is difficult to enforce across the Internet infrastructure. RFC 1876 is the standard that outlines "A Means for Expressing Location Information in the Domain Name System." Specifically, this is done by placing the location information of a server on the DNS registration page. Several popular servers have employed this standard but not enough to be directly useful as of yet.

To check the LOC DNS information of a server, you need to query for the LOC record type of the host:

$ host -t LOC <hostname>
<hostname> LOC 37 23 30.900 N 121 59 19.000 W 7.00m 100m 100m 2m

This parses out to 37 degrees 23' 30.900'' North Latitude by 121 degrees 59' 19.000'' West Longitude at 7 meters in altitude, with an approximate size of 100 meters at 100 meters horizontal precision and 2 meters vertical precision. There are several benefits to servers that offer their geographic location in this way. First, if you are connecting from a server that shows its DNS LOC information, determining your geolocation is simple, and applications may use this information without further work, although some verification may be useful. Second, if you are connecting on your second or third bounce through a server that has DNS LOC information, it may be possible to make an estimate of your location based on traffic and ping times. However, it should be obvious that these estimates greatly degrade accuracy.
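The degree/minute/second fields in a LOC record convert to the decimal degrees used by most mapping tools. An illustrative Python sketch of that conversion:

```python
def loc_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert one DNS LOC coordinate (e.g. 37 deg 23' 30.900'' N) to
    decimal degrees; southern and western coordinates come out negative."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if hemisphere in ("S", "W") else value
```

For example, 37 23 30.900 N works out to roughly 37.3919 degrees North latitude.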

It also is possible to put the DNS LOC information for your Web site in its registration (see Resources). If more servers come to use LOC information, geolocation accuracy will be much easier to attain.

Sidebar: host
host is a DNS lookup utility that lets users find out various pieces of information about a host. The simplest use is looking up the IP address for a hostname, or the reverse: given an address in dotted-decimal IPv4 notation, the canonical name of the host is returned. The type flag, -t, can be used to obtain specific record types from the name server.

Where There's a Name, There's a Way
Many users hopping onto the Internet probably aren't coming from a major server. In fact, most users don't have a static IP address. Dial-up, cable modems and cell phone connections are assigned a dynamic IP address that may change multiple times in one day or not at all for several weeks. Therefore, it becomes difficult to tie these dynamic addresses to a single location.

To our rescue, these service providers typically have an internal naming scheme for assigning IP addresses and associating names with those addresses. Typically, the canonical name of an IP address contains a country-code top-level domain (ccTLD) as a suffix: CN is China, FR is France, RO is Romania and so on. Furthermore, the name may even contain the city or region in which the IP address is located. Often, however, this information is shortened to some name that requires a heuristic to decipher. For example, in your service or application, a user may appear to be coming from an address whose canonical name contains the fragment .try.. A whois on this address reveals it is a WideOpenWest account from Michigan. Using some logic, it is possible to deduce that this user is connecting through a server located in Troy, MI, hence the .try. in the canonical name.
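The ccTLD heuristic described above can be sketched as follows (an illustrative Python sketch; the country table here is only a stub):

```python
# A few ccTLD-to-country mappings from the examples above; the complete
# list can be built from the IANA root-zone data mentioned below.
CCTLD_COUNTRIES = {"cn": "China", "fr": "France", "ro": "Romania"}

def country_from_hostname(hostname):
    """Guess a country from the ccTLD suffix of a canonical host name.

    Returns None for generic TLDs such as .com or .net, where a
    heuristic like the ".try." example is needed instead."""
    tld = hostname.rstrip(".").rsplit(".", 1)[-1].lower()
    return CCTLD_COUNTRIES.get(tld)
```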

Some projects have been started to decipher these addresses (see Resources), and you also can get all of the country codes and associated cities and regions of a country from the IANA Root-Zone Whois Information or the US National Geospatial-Intelligence Agency, which hosts the GEOnet Names Server (GNS). The GNS has freely available data files on almost all world countries, regions, states and cities, including their sizes, geographic locations and abbreviations, as well as other information.

Information such as that presented on the GNS also can be used to provide users with utilities and services specific to their geographical locations. For example, it is possible to determine a user's local currency, time zone and language. Time zone is especially useful for members of a community or chat group to determine when another friend may be available and on-line.

Where Are You Located?
Now that we've explained some of the techniques that can be used in geolocating Internet users by their IP addresses, we offer you a chance to try it out. Point your Web browser of choice here, and see how accurate or inaccurate the current results are. Please leave comments below about the accuracy of your results as well as any ideas you may have.

Releasing the Chromium OS open source project

This article is by Google:
In July we announced that we were working on Google Chrome OS, an open source operating system for people who spend most of their time on the web.

Today we are open-sourcing the project as Chromium OS. We are doing this early, a year before Google Chrome OS will be ready for users, because we are eager to engage with partners, the open source community and developers. As with the Google Chrome browser, development will be done in the open from this point on. This means the code is free, accessible to anyone and open for contributions. The Chromium OS project includes our current code base, user interface experiments and some initial designs for ongoing development. This is the initial sketch and we will color it in over the course of the next year.

We want to take this opportunity to explain why we're excited about the project and how it is a fundamentally different model of computing.

First, it's all about the web. All apps are web apps. The entire experience takes place within the browser and there are no conventional desktop applications. This means users do not have to deal with installing, managing and updating programs.

Second, because all apps live within the browser, there are significant benefits to security. Unlike traditional operating systems, Chrome OS doesn't trust the applications you run. Each app is contained within a security sandbox making it harder for malware and viruses to infect your computer. Furthermore, Chrome OS barely trusts itself. Every time you restart your computer the operating system verifies the integrity of its code. If your system has been compromised, it is designed to fix itself with a reboot. While no computer can be made completely secure, we're going to make life much harder (and less profitable) for the bad guys. If you dig security, read the Chrome OS Security Overview or watch the video.

Most of all, we are obsessed with speed. We are taking out every unnecessary process, optimizing many operations and running everything possible in parallel. This means you can go from turning on the computer to surfing the web in a few seconds. Our obsession with speed goes all the way down to the metal. We are specifying reference hardware components to create the fastest experience for Google Chrome OS.

There is still a lot of work to do, and we're excited to work with the open source community. We have benefited hugely from projects like GNU, the Linux Kernel, Moblin, Ubuntu, WebKit and many more. We will be contributing our code upstream and engaging closely with these and other open source efforts.

Google Chrome OS will be ready for consumers this time next year. Sign up here for updates or if you like building your operating system from source, get involved at

Lastly, here is a short video that explains why we're so excited about Google Chrome OS.

Wednesday, January 20, 2010


Introduction to AJAX
Asynchronous JavaScript + XML (AJAX) is essentially a branding term for a bundle of common web technologies, including JavaScript, DHTML and a utility object called XMLHTTP. The short story is that, in combination, these tools reduce the need for web browser applications to reconnect to a web server every time additional data is downloaded.

The means for accomplishing this have been around for quite some time. Suddenly, though, thanks largely to applications like Google Maps, it's all the rage.

Not to be left behind, Microsoft has announced project Atlas for ASP.NET 2.0; for ASP.NET 1.1, we already have Michael Schwarz's Ajax.Net.

According to one Atlas project member, “What we’ve set out to do is to make it dramatically easier for anyone to build AJAX-style web applications that deliver rich, interactive, and personalized experiences. Developers should be able to build these applications without great expertise in client scripting; they should be able to integrate their browser UI seamlessly with the rest of their applications; and they should be able to develop and debug these applications with ease.”

Atlas is being developed on top of ASP.NET 2.0 and is slated to contain the following components:

Atlas Client Script Framework
The Atlas Client Script Framework is an extensible, object-oriented 100% JavaScript client framework that allows you to easily build AJAX-style browser applications with rich UI and connectivity to web services. With Atlas, you will be able to write web applications that use a lot of DHTML, JavaScript, and XMLHTTP, without having to be an expert in any of these technologies.

The Atlas Client Script Framework will work on all modern browsers, and with any web server. It also won’t require any client software installations, only standard script references in the web page.

The Atlas Client Script Framework will include the following components:

An extensible core framework that adds features to JavaScript such as lifetime management, inheritance, multicast event handlers, and interfaces
A base class library for common features such as rich string manipulation, timers, and running tasks
A UI framework for attaching dynamic behaviors to HTML in a cross-browser way
A network stack to simplify server connectivity and access to web services
A set of controls for rich UI, such as auto-complete textboxes, popup panels, animation, and drag and drop
A browser compatibility layer to address scripting behavior differences between browsers.
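
To give a feel for one item on that list, here is a hedged sketch of what a "multicast event handler" might look like in plain JavaScript. This is an illustration of the concept only, not the actual Atlas API:

```javascript
// Illustrative multicast event handler in plain JavaScript -- the kind
// of core-framework feature the component list above describes. Not the
// real Atlas API; names here are invented for the example.
function MulticastEvent() {
  this.handlers = [];
}
MulticastEvent.prototype.add = function (fn) {
  this.handlers.push(fn);
};
MulticastEvent.prototype.remove = function (fn) {
  var i = this.handlers.indexOf(fn);
  if (i !== -1) this.handlers.splice(i, 1);
};
MulticastEvent.prototype.fire = function (args) {
  // Every registered handler is invoked -- "multicast" dispatch.
  for (var i = 0; i < this.handlers.length; i++) {
    this.handlers[i](args);
  }
};
```

A framework built on a primitive like this lets several independent components react to the same UI event without knowing about each other.
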
ASP.NET Server Controls for Atlas
For ASP.NET applications, a new set of AJAX-style ASP.NET Server Controls will be developed and the existing ASP.NET page framework and controls will be enhanced to support the Atlas Client Script Framework.

The Atlas Client Script Framework will fully support ASP.NET 2.0 client callbacks, but will enrich the level of integration between the browser and the server. For example, you will be able to data bind Atlas client controls to ASP.NET data source controls on the server, and you’ll be able to control personalization features of web parts pages asynchronously from the client.

ASP.NET Web Services Integration
Like any client application, an AJAX-style web application will usually need to access functionality on the web server. The model for connecting to the server for Atlas applications is the same as for the rest of the platform – through the use of Web services.

With ASP.NET Web Services Integration, Atlas applications will be able to access any ASP.NET-hosted ASMX or Indigo service directly through the Atlas Client Script Framework, on any browser that supports XMLHTTP. The framework will automatically handle proxy generation, and object serialization to and from script. With web services integration, you can use a single programming model to write your services, and use them in any application, from browser-based sites to full smart client applications.
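
The "proxy generation" idea can be illustrated with a small sketch: given a service URL and a list of method names, build a client-side object whose methods forward calls to the server. Everything here (the helper name, the URL scheme, the injected transport) is an assumption for illustration; the real framework would issue the XMLHTTP request and handle serialization itself:

```javascript
// Sketch of client-side proxy generation: each listed method name
// becomes a function that serializes its arguments and hands the call
// to a transport function. The transport is injected so this sketch is
// not tied to XMLHTTP.
function makeServiceProxy(serviceUrl, methodNames, transport) {
  var proxy = {};
  methodNames.forEach(function (name) {
    proxy[name] = function (args, callback) {
      // e.g. POST serviceUrl + "/" + name with a serialized payload
      transport(serviceUrl + "/" + name, JSON.stringify(args), callback);
    };
  });
  return proxy;
}
```

With something like this in place, calling a web service from the browser reads like an ordinary method call: `svc.GetQuote({ symbol: "MSFT" }, handleReply)`.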

ASP.NET Building Block Services for Atlas
With ASP.NET 2.0, Microsoft has built a set of building block services that make it easy to build personalized web applications. These building blocks reduce the amount of code you have to write for common web application scenarios, such as managing users, authorizing users by role, and storing profiles and personalized data.

With Atlas, these building blocks will be made accessible as web services that can be used from the client framework in the browser or from any client application.

Client Building Block Services
In addition to DHTML, JScript, and XMLHTTP, the Atlas project is looking at other services that allow websites to harness the power of the client to deliver an enriched experience.

The local browser cache is an example of such a service. When enabled, websites can store content in that cache and later retrieve it efficiently. The problem is that there is no API from the browser to store data in the cache. With Atlas, the plan is to provide programmable access to a local store/cache, so that applications can locally cache data easily, efficiently and securely.
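
A hedged sketch of what such a programmable store might look like from script: an in-memory cache with per-entry expiry. A real implementation would persist entries and enforce security boundaries; the class and method names here are invented for the example:

```javascript
// Toy local store/cache: entries expire after a caller-supplied TTL.
// The clock function is injectable so behavior is easy to verify.
function LocalCache(now) {
  this.entries = {};
  this.now = now || function () { return Date.now(); };
}
LocalCache.prototype.set = function (key, value, ttlMs) {
  this.entries[key] = { value: value, expires: this.now() + ttlMs };
};
LocalCache.prototype.get = function (key) {
  var e = this.entries[key];
  if (!e) return undefined;
  if (this.now() > e.expires) {   // stale entry: evict and miss
    delete this.entries[key];
    return undefined;
  }
  return e.value;
};
```

The point of exposing an API like this is that the application, not the browser's heuristics, decides what gets cached and for how long.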

The Atlas team is also looking at other hooks into local applications and resources. These will be defined in more detail as the project matures.

The current intention is to have a preview release of Atlas ready for the Professional Developers Conference (PDC 05) in September.

AJAX for ASP.NET 1.1
For those chomping at the bit, the good news is that thanks to blogger and .NET developer Michael Schwarz there is already a fairly impressive implementation of AJAX for ASP.NET 1.1. The Ajax.NET project is (as of recently) an open source project with a home on SourceForge. It is a fairly mature effort with a lot of community backing, some solid documentation, and some easily digestible sample code.

ASP.NET, AJAX and jQuery

Check out this SlideShare Presentation:

Friday, January 15, 2010

How To Use BitTorrent – Beginners Guide

The program that downloads files from BitTorrent is referred to as a BitTorrent client. So, if you want to use BitTorrent, you need to install a BitTorrent client on your computer. The most popular are BitTorrent, Transmission (for Mac OS X), ABC, Azureus, BitComet, BitTornado, and uTorrent.

The first step is to install one of these programs. I like BitComet because it's easy to use and downloads fast. So just download this file and run it to install the client.

Most of these clients do not have search boxes in them, so what you need to do is open a Torrent file (referred to simply as a Torrent) with the program (client), so that it can download what you're looking for. A Torrent is a link to a certain file you can download. For example, a few different people have probably made video files out of, say, "Revenge of the Sith". Each time someone puts their version of Revenge of the Sith online for BitTorrent users to download, a new Torrent is created that links to that specific version of the movie. Get your hands on a Torrent, and your BitTorrent client will then be able to use it to download the version of Revenge of the Sith that corresponds to that Torrent.

So how do you get the Torrent? Typically, you go to Torrent search engines, which are basically huge databases of Torrents. Popular examples include MiniNova, ISO Hunt, TorrentSpy, and the infamous Pirate Bay.

You may also find torrents that can be downloaded from websites, like this great catalogue of TV-show torrents. You can then right-click the link that takes you to the torrent file and "Save Target As" (Internet Explorer) or "Save Link As" (Firefox). Then just open the file with your BitTorrent client (double-click on the file's icon) and it will ask you if you want to download the movie. Once you download the video file that this torrent points to, you'll need to install the Xvid codec in order to be able to play that video file on your media player.

Many video files you'll find on BitTorrent are encoded in Xvid, as it allows for great video quality despite the small file size. An Xvid file is usually around 700 megs (0.7 Gigs) but looks almost as good as a DVD, which is usually around 9.4 Gigs. So install the Xvid codec; you only have to do it once, and from then on Windows Media Player will be able to play the Xvid video files you get on BitTorrent.

One other codec you'll encounter often on BitTorrent, since it too allows for near-DVD-quality 700-meg files, is Divx, so install Divx while you're at it. One thing that is particularly cool about video files that use the Divx codec (other than the fact that they look almost as good as a DVD but only take up 700 megs instead of 9.4 Gigs) is that many DVD players today will play Divx video files. This means that even if you don't have a DVD burner, you can take a Divx video file, burn it onto a data CD, and watch it on your DVD player.

Say you go to one of those sites and search for The Empire Strikes Back. Your results page will look like this.

To the right you see a number of Seeds and Leechers for each Torrent. A Seed is a user who has the whole file and is sharing it. A Leecher is someone who, like you, is in the process of downloading the file. So you’ll want to pick a version of the file that has lots of Seeds and Leechers, so that the download is faster and more certain to be successful. (One interesting note is that BitTorrent can see which parts of the file are most rare – i.e. which parts of the file have been downloaded by the fewest Leechers – and make sure those parts get uploaded from the Seeds right away, that way the Leechers will have more of the file between them). I usually click on the Seeds at the top of that column, so that the search results are arranged with the files with the most Seeds on top. Also, most BitTorrent search sites allow you to browse their entire index of torrent files, organizing them alphabetically, by date, by type, or by number of Seeds, so you can see what files have the most Seeds (and thus download faster, and are probably pretty good). For example, to see what movie torrents on BitTorrent currently have the most seeds, click here or here.
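
That "rarest first" behavior in the parenthetical can be sketched in a few lines: given how many peers hold each piece of the file, request the piece you still need that the fewest peers have. Real clients add randomization and an end-game mode; this is just the core rule, with invented function and parameter names:

```javascript
// Sketch of rarest-first piece selection. availability[i] is how many
// peers currently have piece i; havePieces[i] is whether we already
// downloaded it. Returns the index of the rarest piece we still need,
// or -1 if there is nothing useful to request.
function pickRarestPiece(availability, havePieces) {
  var best = -1;
  for (var i = 0; i < availability.length; i++) {
    if (havePieces[i]) continue;          // already downloaded
    if (availability[i] === 0) continue;  // nobody has it yet
    if (best === -1 || availability[i] < availability[best]) {
      best = i;                           // rarer piece found
    }
  }
  return best;
}
```

Prioritizing rare pieces spreads copies of every part of the file across the swarm, so the download can survive even if a Seed disappears.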

Of course, you should also choose based on the size (do you want a 4-Gig DVD image file or a 700-Meg Divx/Xvid video file?). Once you have chosen which version(s) to try and download, just click on "Download Torrent". If you associated Torrent files with the BitTorrent client when you installed it, then the BitTorrent client should start up, ask you if you want to download that file, and then start downloading it, putting it in some folder that I'm sure can be found and changed if you poke around the settings and preferences.

So to recap, you want (say) a movie file. A bunch of people out there have it. There is this thing called a Torrent file (or just “a Torrent”) which will tell BitTorrent clients (programs) how to download that particular movie file. So you install a BitTorrent client on your computer, use it to open a Torrent that corresponds to the movie file you want to download, and the BitTorrent client can then start talking to the Peers and Seeds who can send you bits of that movie file. After a while, your BitTorrent client will have accumulated the entire movie file, and will dump it into some folder in your hard drive.

SEO - Search Engine Optimization Ranking Tips

Search Engine Criteria

The first thing that you need to know is that most of the major search engines utilize an algorithm to determine where a website ranks. The search engines have setup specific criteria that a website must meet to get to the top of the list. The criteria are different for every engine, but all engines share several commonalities. It all boils down to the type and amount of content provided on a given website, the level of optimization done on the site, and the popularity of the website (link popularity/PageRank). Below we examine these commonalities in more detail.

Understanding Keywords and Search Behavior

In order to rank a website, you must first identify and understand your target audience. Keywords can tell you a lot about the type of user that will potentially visit your website. Many times it is best to consult an optimization professional during this process. Most individuals can determine which keywords are best for their website if they use the right tools, which are readily available.

The difficult task is determining which keywords are most relevant and realistic to optimize for. One must also determine which keywords to optimize for on the various pages. Search engine optimization is often referred to as "an art," and this is a perfect example of where it takes a professional's touch to achieve the best results. When competing for any keyword, you can be sure that there is anywhere from 100,000 to 100,000,000+ websites competing against you. Proper keyword targeting is crucial for a successful search engine marketing and optimization campaign.

The first thing to realize when targeting keywords is that it is not all about ranking for the most popular keyword. The most successful search engine marketing and optimization campaigns target the most relevant keywords. As an example, Hal's Auto Dealership in Wisconsin will probably never rank for the keyword "car." You must realize that the top 500+ results for the keyword "car" all have a much larger web presence when compared to Hal's Auto Dealership. Hal's Auto Dealership would be much better suited targeting keywords like "car dealership in Wisconsin," "used cars in Wisconsin," "new cars in Wisconsin," and the like.

There are several tools available from Google, Overture, and third party software developers that can make the keyword research process easier. We utilize a combination of these tools and others that have been licensed to create a professional analysis for our clients.

Content, Content, Content

Again, thinking at the algorithm level, how do search engines rank websites? The answer is quite simple: Those who have superior content rule. The phrase "content is king" was born about 4 years ago, and it still holds true today. If you want to be relevant for specific keywords, then you need superior or at least highly competitive content. With that said, if you want to rank for a slew of different keyword phrases, then realize that you require a slew of relevant content. On average, one can target between three and five related keyword phrases per page.

This is another example of where a search engine optimization professional can lend some expertise. An expert can help determine what content you need, when compared to the type of keywords that you want to target. We are one of many firms that offer this type of consulting.

Are you Optimized?

Now that you have determined the keywords you need to rank for, and you have the right amount of content in place, you need to optimize your website to stand out from the crowd. Regarding on-page scripting, the search engines are really looking for 2 things: 1) keywords in the Meta and Title fields and 2) keywords in the body of your website.

Regarding Meta Tags there are 2 very important fields:

1) Title Tag - arguably the most important SEO tag for any website. Google, Fast/Alltheweb/Lycos, and Ask Jeeves support approx. 60 characters in the title, while Inktomi and Altavista allow for up to 110 characters in the title. It is important to target the most critical keywords in the Title. Every page should have a unique Title.

2) META Description Tag - also very important for every page on the site. Some engines do display the description defined, while others do not. All search engines do read the description tag, and do utilize the content found within in the ranking process. A good rule of thumb is to create descriptions that do not exceed 200-250 characters.

The META keyword Tag is essentially useless in today's SEO market, but it is oftentimes good to utilize as a placeholder for the targeted keywords.
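
The character limits quoted above (roughly 60 characters for a title, 200-250 for a description) are easy to enforce with a small helper. This is a sketch using the article's guideline numbers; the limits themselves are rules of thumb, not hard search-engine rules:

```javascript
// Trim text to a character limit without chopping a word in half,
// adding an ellipsis when something was cut.
function trimToLimit(text, limit) {
  if (text.length <= limit) return text;
  var cut = text.slice(0, limit);
  // If the next character continues a word, back off to the last space.
  if (text.charAt(limit) !== ' ') {
    var lastSpace = cut.lastIndexOf(' ');
    if (lastSpace > 0) cut = cut.slice(0, lastSpace);
  }
  return cut + '...';
}

// Apply the article's rule-of-thumb limits to both fields at once.
function checkMetaLengths(title, description) {
  return {
    title: trimToLimit(title, 60),
    description: trimToLimit(description, 250)
  };
}
```

Running every page's Title and Description through a check like this is a cheap way to keep them unique, on-limit, and keyword-bearing.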

The other optimization element to keep in mind is body content. Search engines are looking for dense content, with the right amount of keyword density and/or keyword placement. Depending on the architecture of a specific website, one may consider various means of placing keywords within body content. Typically it is best to place body content within a viewable area of the website. However, some websites may be graphics-intensive, or built with frames or Flash. In this scenario you will need to place keywords in ALT tags or NoScript tags.
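
"Keyword density" is usually computed as the share of the body copy taken up by occurrences of the target phrase. Definitions vary between tools; here is a sketch of one common formulation, with invented names:

```javascript
// Keyword density: (occurrences of the phrase * words in the phrase)
// divided by the total word count of the body text. Case-insensitive,
// whitespace-tokenized -- one common formulation among several.
function keywordDensity(bodyText, phrase) {
  var words = bodyText.toLowerCase().split(/\s+/).filter(Boolean);
  var target = phrase.toLowerCase().split(/\s+/).filter(Boolean);
  if (words.length === 0 || target.length === 0) return 0;
  var hits = 0;
  for (var i = 0; i + target.length <= words.length; i++) {
    var match = true;
    for (var j = 0; j < target.length; j++) {
      if (words[i + j] !== target[j]) { match = false; break; }
    }
    if (match) hits++;
  }
  return (hits * target.length) / words.length;
}
```

A measure like this is only a diagnostic: the goal is natural copy that happens to contain the target phrases, not text stuffed to hit a number.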

Keep in mind that if you are following the rules published by the various search engines, you must provide relevant content at all times, and it is always best to show users the same content that you show search engines.

At the end of the day, you must provide spiders with the keywords you want to rank for in a variety of places, including META tags, Title, ALT tags, body content, links, etc. Enlisting the help of a professional is oftentimes the most economical means of handling these various nuances.

Are you popular?

So now you have determined the right keywords on the right pages, you've created all of the necessary content, and you've optimized all of the content to the best of your capabilities. Congratulations - you're now in the 80th percentile (from an optimization standpoint) of the websites listed for the keywords you're trying to target. So how do you get a 1st, 2nd, or 3rd page listing? The answer is quite simple: You have to be the most popular too. That's right, it's a popularity contest. In other words, how many other websites know you (link to you), and how popular are they?

This is typically referred to as link popularity, or called PageRank by Google. Every algorithm uses a different form of link popularity, PageRank being the most sought after because of the enormous Google network. Essentially, every website is given a score somewhere between a 0 and a 10. Google and a small handful of other sites on the web score a 10 - a perfect score that is effectively unattainable for everyone else. You need a high score to target popularly searched keywords.
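
The intuition behind PageRank is that a page's score depends on the scores of the pages linking to it, which can be computed by simple iteration. Here is a hedged sketch of that idea; the 0-10 toolbar figure is a scaled display of a value like the one computed here, and this is an illustration of the published concept, not Google's actual algorithm:

```javascript
// Toy PageRank via power iteration. links[i] lists the page indices
// that page i links to; damping is the usual "random surfer" factor
// (commonly 0.85). Returns one score per page, summing to ~1.
function pageRank(links, damping, iterations) {
  var n = links.length;
  var rank = new Array(n).fill(1 / n);
  for (var it = 0; it < iterations; it++) {
    var next = new Array(n).fill((1 - damping) / n);
    for (var i = 0; i < n; i++) {
      if (links[i].length === 0) {
        // dangling page: spread its rank evenly across all pages
        for (var k = 0; k < n; k++) next[k] += damping * rank[i] / n;
      } else {
        // each outbound link carries an equal share of this page's rank
        for (var j = 0; j < links[i].length; j++) {
          next[links[i][j]] += damping * rank[i] / links[i].length;
        }
      }
    }
    rank = next;
  }
  return rank;
}
```

The practical takeaway matches the text above: a link from a highly ranked page passes along far more weight than one from an obscure page.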

Consulting an expert here is almost mandatory for success.

Paid Inclusion

Once your site is fully optimized you'll need to submit to the engines, or pay the engines to guarantee that you will be indexed into their database. Paid Inclusion doesn't mean better rankings, but it does mean you are guaranteed the chance to rank because you will be listed in the databases. Currently you can perform Paid Inclusion with Inktomi, Altavista, Fast Search/Alltheweb/Lycos, and Ask Jeeves. We can provide Paid Inclusion services, and we incorporate Paid Inclusion into many of our Search Engine Optimization and Placement packages.