No, but there is more... much more, thanks to my new friend Ajax.

As I mentioned in the previous post, I made a static website in HTML, CSS, JavaScript, and JSON. It works pretty well, and that’s great. The problem is that my sister is the one who should be adding and editing content, because it is her site. So the question is: how can I make an easily updatable site that lives on a server without any server-side coding?

I had pretty much figured out how I wanted it to work even before I finished coding the first version of the site. Sure, I could just write a client application that spits out HTML/CSS code and uploads it to the server, but no, that isn’t cool enough.

I wanted to separate the site’s data from its formatting. That way the client application would only need to create JSON files and upload them along with any new content.

That means the JSON data needs to come out of the HTML files, and the page needs to fetch that data whenever the person using the site asks for it. What does this mean? It means that I now have a site that consists of index.htm, a CSS file, and a JavaScript file. That is pretty much the whole website (not including the JSON files)! How can this be possible? Thank our new friend Ajax (Asynchronous JavaScript and XML), which lets JavaScript code fetch data from the server (in my case, text files containing JSON data) whenever the user takes some action.
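The post never shows what one of those JSON files actually looks like, so here is a purely hypothetical links.json (the entry names `title` and `file` are my guesses, not the real layout) and how the page would turn the downloaded text into a usable object:

```javascript
// Hypothetical links.json contents (the real file's layout isn't shown):
var linksJson =
    '{"links": [' +
    '  {"title": "Home",    "file": "home.json"},' +
    '  {"title": "Gallery", "file": "gallery.json"}' +
    ']}';

// The browser downloads this as plain text; parsing it yields an object
// the page can build its navigation from.
var data = JSON.parse(linksJson);

for (var i = 0; i < data.links.length; i++) {
    console.log(data.links[i].title + " -> " + data.links[i].file);
}
```

Each entry gives a link its label plus the name of the JSON file to fetch when that link is clicked.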

Let’s get on to a few examples, shall we?

The whole index page:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<link rel="stylesheet" type="text/css" href="sapphirearts.css" />
<script src="sapphirearts.js" type="text/javascript"></script>
</head>
<body onload="getJsonFromServer('links.json', 'init');">
<div id="divcontainer">
<div id="divbanner"></div>
<div id="divlinks"></div>
<div id="divbody">
<div id="divbodycontent"></div>
</div>
<div id="divfooter"></div>
</div>
</body>
</html>

What happens when this page is loaded on a client computer? The JavaScript function getJsonFromServer() is called. That function downloads a block of JSON data that defines the main links for the site. Pretty cool, eh? When you click one of these main links, it calls getJsonFromServer() again with different parameters, which ends up filling the content area of the page.
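The code that turns the downloaded link data into clickable links isn’t shown in the post; here’s a rough sketch of what it might look like. The function name buildLinks, the `{title, file}` entry shape, and the 'content' state value are all my assumptions:

```javascript
// Hypothetical sketch: turn downloaded links data into anchors that
// re-call getJsonFromServer() with their own file name and a new state.
function buildLinks(data) {
    var html = '';
    for (var i = 0; i < data.links.length; i++) {
        html += '<a href="#" onclick="getJsonFromServer(\'' +
                data.links[i].file + '\', \'content\');">' +
                data.links[i].title + '</a> ';
    }
    // Drop the generated anchors into the links area of the page.
    document.getElementById('divlinks').innerHTML = html;
}
```

Each anchor carries its own getJsonFromServer() call, so clicking a link kicks off another fetch without ever leaving the page.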

function getJsonFromServer(filename, newstate) {
    xmlHttp = getXmlHttpObject();
    if (xmlHttp != null) {
        // save the state so we can process it later
        currentState = newstate;
        xmlHttp.onreadystatechange = stateChanged;"GET", filename, true);
    }
}

Which relies on the function:

function getXmlHttpObject() {
    var objXMLHttp = null;
    if (window.XMLHttpRequest) {
        objXMLHttp = new XMLHttpRequest();
    } else if (window.ActiveXObject) {
        objXMLHttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    return objXMLHttp;
}
The getXmlHttpObject() function needs to execute different code depending on which browser the user is running. That is just one of the multitude of problems in achieving cross-browser compatibility.
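One piece is still missing: the stateChanged callback that getJsonFromServer() hooks up but that never gets shown. Here’s a rough sketch of what it might do, assuming the globals xmlHttp and currentState from above and a renderPage() helper that is purely hypothetical:

```javascript
// Rough sketch of the stateChanged handler (the real one isn't shown).
// Assumes the globals xmlHttp and currentState set in getJsonFromServer(),
// plus a hypothetical renderPage(data, state) that updates the page.
function stateChanged() {
    // readyState 4 means the request finished and responseText is ready.
    if (xmlHttp.readyState == 4) {
        var data = JSON.parse(xmlHttp.responseText);
        renderPage(data, currentState);
    }
}
```

One caveat: browsers of this era didn’t have native JSON.parse (that came with ECMAScript 5), so code back then typically used eval() or a library like json2.js to turn the response text into an object.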