.Net Archive

SXSW to Go: Creating Razorfish’s iPhone Guide to Austin (Part 1)

Once a year, the internet comes to visit Austin, Texas at the South by Southwest Interactive (SXSWi) conference, and, for 2009, the Razorfish Austin office was determined to leave an impression. We ended up making close to 3,000 impressions.

Industry leaders and the web avant-garde converge on Austin for one weekend each year to learn, network, and see the cutting edge of interactive experience and technology. And also to take advantage of any number of open bars. It is a conference, after all.

The Razorfish Austin office typically plays host to a networking event and takes out ad space in the conference guidebook. In 2009, confronted with shrinking budgets in the wake of the global financial crisis, we knew we had to set ourselves apart and do it on the cheap.

iPhone apps were on everyone’s mind (and would be in every conference-attendee’s pocket), and would prove to be the perfect venue to showcase Razorfish’s skill and Austin’s personality. In late January 2009, three presentation-layer developers and a creative director formed a small team and set out to build an iPhone-ready guide to Austin.

Over this series of articles, I’ll be diving into how we created the Razorfish Guide to SXSW iPhone-optimized web site. Part 1 will deal with requirements gathering and technology choices, part 2 will cover design and development, and part 3 will talk about what we did to optimize the mobile experience.


The first thing we did as a team was to sit down and discuss what the guide had to be. Going in, we knew we wanted it to be on the iPhone because of the cachet associated with the device. We also knew that we had a very condensed timeline to work in – we needed to launch in 5-6 weeks, and we all had other projects that required our focus.

To App, or not to App?

One of the first decisions we made was to approach the guide as an iPhone Web App, rather than building a compiled Objective-C application. We knew we didn’t have a developer available who already knew Objective-C, and that we would have trouble getting approved and into the App Store in time for our launch. Most importantly, we needed as many people as possible to be able to use the guide, and didn’t have time to create different versions for different devices.

iPhone Web Applications offer not only a way to leverage the iPhone’s impressive graphical capabilities, thanks to Safari Mobile’s excellent standards support and early adoption of upcoming CSS features, but also a way to reach other platforms using progressive enhancement (testing for a feature, then enhancing the experience for clients that support it).
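The feature-testing half of that approach can be sketched in a few lines of JavaScript. This is not the guide’s actual code; the specific capability checks are illustrative, and the function takes the window object as a parameter so the logic stays testable outside a browser:

```javascript
// Feature detection for progressive enhancement: test for a capability,
// then enhance only for clients that have it. The specific checks below
// are illustrative examples, not the guide's actual code.
function detectFeatures(win) {
  var doc = win.document || {};
  return {
    // querySelector is present in Safari Mobile but missing in many
    // older mobile browsers
    querySelector: !!doc.querySelector,
    // touch events identify iPhone-class devices
    touchEvents: 'ontouchstart' in win,
    // WebKit CSS transform support, probed via the style object
    cssTransforms: !!(doc.documentElement && doc.documentElement.style &&
                      'WebkitTransform' in doc.documentElement.style)
  };
}

// In a browser you would call detectFeatures(window) and, for example,
// only attach swipe handlers when touchEvents is true, leaving plain
// links working for everyone else.
```

The important property is that every enhancement is opt-in: a browser that fails every test still gets the plain HTML experience.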

Mobile madness

There are dozens, if not hundreds, of mobile browsers out there, with wildly differing interpretations of CSS and JavaScript. Check out Peter-Paul Koch’s CSS and JavaScript mobile compatibility tables if you need convincing. Supporting multiple mobile devices is no cakewalk, especially since many of them have incorrect or misleading user agents.

The iPhone was our target, and some other mobile browsers, such as many versions of Opera Mobile, also have relatively good standards support – but what about IE Mobile or the BlackBerry browser?

We quickly came to the conclusion that, because of the condensed timeline, we should test in and support only Safari Mobile. However, the site also needed to be fully usable with no CSS or JavaScript whatsoever. By ensuring this baseline level of functionality, we could be certain that even the BlackBerry browser could at least limp across the finish line.

Back to the desktop

Along with choosing mobile browsing platforms to support, we also had to decide for which desktop browsers to design the site. Ordinarily, desktop compatibility testing is dominated by Internet Explorer 6, but this site was geared towards web designers and developers.

That meant more people would be visiting the site using Chrome than using IE6.

IE6 was swiftly kicked to the curb, and we settled on fully supporting Firefox 3, Safari 3 and Chrome, with basic support for Internet Explorer 7. Safari and Chrome support came almost for free, because the two render almost identically to iPhone’s Safari Mobile.

Site be nimble, site be quick

Supporting mobile devices means supporting weak signals, slow connections, small screens, bite-sized memory, and users who are on the go. There are a number of factors conspiring against any mobile website, and we knew that we would have to eke out every last bit of performance in order to overcome them.

Limit the chatter

Client interaction with the server not only increases design complexity, but it also increases the size and number of requests. There were several key factors that made us decide to keep forms and complex interactivity out of the site:

  • Applications that use forms have to validate the data, and guard against attacks. This can slow down the experience, and also would require a more in-depth security review.

  • POST requests are slow. Data-heavy responses are slow. Increasing the number of requests involved in typical usage puts a heavier burden on the server and delays the user in getting from point A to point B.

  • Sites that can be customized or that allow the user to log in typically can’t cache data as efficiently, because page data is often sensitive to the user.

To make the site run quickly, launch on time, and be successful in its goals, the application would be focused on being the best guide it could be, and not on integrating your Twitter account and the kitchen sink.

Sell the brand

Lastly, the guide had to make Razorfish look good and leave a strong impression of who we are and what we’re all about. If the guide was as informative and fast and easy to use as can be, but didn’t sell our brand, it would be a failure.


Based on the requirements we gathered, the team picked familiar development libraries and languages to work with.

XHTML, CSS and JavaScript

These languages should come as no surprise, as they’re integral to all web applications. An important decision that we did make, however, was that no JavaScript or CSS frameworks should be used.

For desktop development, our industry has become increasingly reliant on JavaScript frameworks to smooth out cross-browser wrinkles and speed up our work. Generally, JavaScript frameworks excel at meeting both of those goals.

There are a couple of problems when considering a JavaScript framework for mobile development:

  • Frameworks add a lot of bulk to the page. 54 KB for jQuery 1.3 isn’t much on the desktop, where fast internet connections are common, but it’s painful over 2G wireless connections used by many mobile phones (the first iPhone model included).

  • When you’re targeting a single platform (or a standards-compliant platform), a lot of the framework’s code is going to go to waste. Much of the code in JavaScript libraries is for abstracting cross-browser compatibility issues.

  • When you’re targeting multiple mobile platforms, most frameworks aren’t built with mobile in mind, and may be unable to perform properly regardless.

  • iPhone doesn’t cache components that are over 25 KB in size. (Unfortunately, this is when the component is decompressed, so it doesn’t matter if the component is under 25 KB when GZIP compression is used.)

  • The framework’s code has to be executed on the client in order to initialize all of the framework’s components. On slower clients, such as mobile devices, this is a longer delay than you might think, and many of those features probably won’t be used on the site.

In the future, JavaScript frameworks may overcome these challenges, but we resigned ourselves to starting from scratch for this project.

CSS frameworks were out of the question for many of the same reasons.


ASP.NET MVC

The ASP.NET MVC Framework was chosen as our server-side technology primarily because of the team’s familiarity with it. We had just recently used the technology on other projects, so it was still fresh in our minds. The MVC framework allows for quick, clean, and very functional designs that you have a great deal of control over.


Razorfish.Web

We elected to use Razorfish.Web, our internally-developed .NET library that’s specialized for use on web projects. Razorfish.Web has a number of features that made it indispensable for this project, such as dynamic CSS and JavaScript compression. As I’ll cover later, we extended the library while building the guide to push optimization even further.

SQL Server

Microsoft’s database engine was the natural choice to go along with ASP.NET MVC. We used LINQ to SQL to easily communicate with the database from the web server.

With our tools selected, we were ready to start building the site. Come back for part 2 to learn about some key design and development decisions that went into making sxsw.razorfish.com.

Leveraging Model Driven Development

[Figure: Project Triangle]

Achieving efficiency in the software development process is a goal every team should strive for. Efficiency can be measured in a variety of ways; the most obvious measurements are cost, project timeline, and the feature set that can be implemented given the first two. In a sense, it boils down to the old project triangle (remember: pick any two of the three criteria).

In essence, there is a trade-off between quality, timeline, and cost. For example, shortening the timeline at equal cost reduces quality. Yet I argue that the triangle approach is not necessarily valid anymore. Traditional development processes have clearly shown that simply extending the timeline on a project to put special care into the design does not actually lead to higher-quality software – quite the contrary.

Yet more dimensions are at play. The number of defects (“bugs”) found in a piece of software translates directly into cost and time, especially when they are found late in the development cycle, creating a dependency between testing quality, time, and cost. Inefficient software design increases the cost of introducing new functionality as requirements change, and a lack of refactoring capability sooner or later leads to the need for a full re-development. The problems are amplified when the software spans multiple independent subsystems, which is often the case in modern web architectures that span content management systems, web services, search engines, commerce engines, custom web applications, and more.

Agile development methodologies have tackled many of these problems in great detail through test-driven development (TDD) and time-boxed iterative release cycles. This article discusses a number of tactics you can deploy in addition to what you find in your agile toolkit to speed up development and tackle complex problems with smaller teams in less time, leveraging the key ideas of Model-Driven Development (MDD).

Leaning on MDD

Model-Driven Development (MDD) is a software development paradigm that puts the modeling aspect of software engineering at the center of the development process.

[Figure: MDD Overview]

The most popular notion of MDD is the Model-Driven Architecture (MDA) standard by the Object Management Group (OMG). MDA is based on a variety of modeling artifacts. At the top is a platform-independent model (PIM), which captures only the business requirements using an appropriate domain-specific language. This model is then translated into any number of platform-specific models (PSMs) using a platform definition model (PDM) for each platform. In essence, this is equivalent to modeling your software in a very high-level, business-specific way and then using a translator such as a code generator (the PDM) to convert the model into code (the platform-specific model). Given the correct translation routines, the same business model can automatically be turned into C#, Java, or PHP.
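As a toy illustration of the PIM-to-PSM translation (the model format and output templates below are invented, and far simpler than real MDA cartridges), one platform-independent model can feed several platform-specific translators:

```javascript
// A platform-independent model: just the business-level shape of an entity.
var personModel = { name: 'Person', fields: ['firstName', 'lastName'] };

function capitalize(s) {
  return s.charAt(0).toUpperCase() + s.slice(1);
}

// Two translators, playing the role of platform definition models,
// turning the same model into platform-specific code.
function toCSharp(model) {
  var props = model.fields.map(function (f) {
    return '  public string ' + capitalize(f) + ' { get; set; }';
  });
  return 'public class ' + model.name + ' {\n' + props.join('\n') + '\n}';
}

function toJava(model) {
  var fields = model.fields.map(function (f) {
    return '  private String ' + f + ';';
  });
  return 'public class ' + model.name + ' {\n' + fields.join('\n') + '\n}';
}
```

Swapping platforms means swapping translators; the business model itself never changes.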

MDA in theory has a number of advantages over traditional coding:

  • It obviously appeals to the business owner who can finally re-use the conceptual business model across technology trends, i.e. re-implementing the solution using new technologies does not require a complete overhaul but is simply a matter of switching technologies. Numerous companies specialize in MDA and even rapid-prototyping tools exist which integrate agile development methodologies with MDA. Instead of developing software in iterations, the model is developed iteratively and can then be generated into executable code.

  • When using code generation frameworks, such as the open source tool AndroMDA, one can quickly build applications using existing code generators. A simple UML domain diagram can immediately be translated into Spring MVC controllers, domain objects, Hibernate mappings, and much more.

  • When the software spans multiple sub-systems, MDA nicely enforces the correct translation of the model across the different technologies used in each of these systems. While I prefer writing generic code to duplicating code via code generation, this isn’t always feasible (e.g. for XML configuration files or TeamSite CMS data capture templates). In MDD, changes to the model can instantly be translated into multiple code artifacts across different technologies at the push of a button.

Yet I also see a number of serious issues with the OMG’s vision:

  • As an “agilist” at heart I strongly oppose the idea of spending excessive time modeling software in great detail such as highly granular UML diagrams. Software is meant to be code, not a myriad of UML diagrams which are modeled without an in-depth understanding of the features and limitations of the underlying frameworks. I value the use of UML as a pictorial language, especially when illustrating concepts either on a white board or in documentation. But not when used in a strong forward-engineering paradigm.

  • MDA reduces the application of a particular technology or framework to a simple technicality, i.e. the creation of a platform definition model. Yet building applications efficiently relies heavily on the capabilities and limitations of the underlying frameworks.

  • Code generation is equivalent to duplicating a code template using the model as an input. However, I generally prefer writing generic, re-usable code to unnecessary duplication. The benefits of generic code are obvious: not only is the application smaller, but debugging and maintaining the code is far easier. Generated code forces you to debug the same piece of logic in many places of the application, and fixing it requires changing the code generator templates and ultimately re-generating the entire application.

  • Building a platform definition model, i.e. the code generator, for an entire application can be a huge undertaking. On the upside, many vendors and open source technologies, such as AndroMDA, ship with a variety of pre-built cartridges. However, by using existing code generators one reduces the implementation flexibility as well as the maintainability of the application. Debugging and fixing issues in these pre-built code generators can be tedious, and fixes can easily be overwritten by the next release of the generator. Further, generic code generators tend to be quite complex precisely because they have to be so generic.

  • When building web applications, I usually like to encourage my teams to push the boundaries and leverage the latest technologies available. Using existing code generation frameworks obviously won’t leverage the bleeding edge of technology, forcing you to write your own.

Leveraging the Key Tenets of MDD

While I argue that MDD in its pure form – that is, building an entire application using this paradigm – is not my first choice, I would also argue that it has an obvious allure. Writing generic code is not always an option, as all modern frameworks require configuration, plumbing code, mapping directives, etc. This is exactly where the code generation aspect comes to fruition. Given a central domain model, many artifacts surrounding a domain object can be generated automatically.

A major objection I am often confronted with is that this approach lacks flexibility, as the code is generated according to the same pattern every time. My argument is that this is actually an advantage for the majority of any application. Of course, the code generation framework needs to be able to handle special situations where the generic functionality needs to be extended.

Let’s consider an example. A software team is integrating an XML-based content management system with a web application. The CMS team is responsible for defining the content input forms in the CMS, which are used by the end user to create the XML. The application team writes a parsing layer which parses the XML into domain objects, and a web application on top of it. After the teams agree on a content model, i.e. the structure of the XML files, both teams can start implementing all the necessary coding artifacts.

[Figure: Sample Application]

However, since all artifacts are developed manually, the teams will encounter a number of integration bugs that result from the two separate systems relying on the same underlying domain model. Further, different bugs are likely to be found in each of the content forms and the associated parsing layer, because different developers make different mistakes.

Consistency may be another issue. Especially when multiple developers are working on the individual pieces of functionality, each usually adds their own spin to the code. Some date fields in the CMS forms may have calendar buttons next to them, some may not. One developer might use camel case, another may not.

Of course, both of these issues can be addressed by establishing sound coding conventions as well as doing impeccable up-front design of all the sub-systems. But reality shows this is rarely the case. Especially when reacting to changes during the development cycle, such as the web application team discovering that it needs additional fields in the CMS, the original design efforts are often neglected.

[Figure: MDD with Code Generator]

Consider the alternative, which is more aligned with the MDD paradigm. The teams agree on a domain model and then build a vertical slice, i.e. a functional prototype of the system through all defined layers. Then, using this prototype, the teams build a code generator which takes the domain model as an input and automatically generates the CMS forms and the application-level parsing layer for the resulting XML files. The domain model is then fed into the code generator and the application is generated automatically. The code generator automatically enforces consistency. If any bugs were encountered as a result of the coupling of the two systems, the code generator would be changed and the application re-generated.

In addition to the maintenance and consistency advantages, the teams also saved time. In the traditional approach, each team had to manually build the application logic for each entity in the domain model in each of the participating subsystems. In the MDD scenario, the teams built a vertical slice prototype and then translated that into a code generator – which automatically generated the application.
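A toy version of such a generator (the model format, the form syntax, and the text() helper referenced by the generated parser are all invented for illustration) shows how one domain model can drive both the CMS form and the parsing layer:

```javascript
// One shared domain model drives every generated artifact.
var eventModel = {
  name: 'Event',
  fields: [
    { name: 'title', type: 'string' },
    { name: 'startDate', type: 'date' }
  ]
};

// Generates a (fictional) CMS data capture form definition.
function generateCmsForm(entity) {
  var lines = ['<form name="' + entity.name + '">'];
  entity.fields.forEach(function (f) {
    lines.push('  <field name="' + f.name + '" type="' + f.type + '"/>');
  });
  lines.push('</form>');
  return lines.join('\n');
}

// Generates the matching parsing stub; the emitted code assumes a
// text(node, name) helper exists in the target application.
function generateParser(entity) {
  var props = entity.fields.map(function (f) {
    return '    ' + f.name + ': text(node, "' + f.name + '")';
  });
  return 'function parse' + entity.name + '(node) {\n' +
         '  return {\n' + props.join(',\n') + '\n  };\n}';
}
```

Because both artifacts come from the same model, renaming or adding a field means editing the model and re-running the generator; the two systems stay in sync by construction.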


Especially when building web applications, a strong alternative to MDD is the use of Rails-like frameworks, such as Ruby on Rails, Grails, MonoRail, and many others. Their underlying core ideas are aligned with what I consider the key advantages of MDD:

  • Don’t Repeat Yourself (DRY): Instead of repeating yourself, write code once and have the framework create the plumbing code under the hood.

  • Convention over Configuration (CoC): Rather than having every aspect of the application be built differently, establish sound conventions and use them throughout the software to ensure consistency and eliminate unnecessary (and unmaintainable) bloated configuration files.
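The CoC idea can be reduced to a tiny sketch: derive every artifact name from the model name by rule, so no per-entity configuration is needed. The specific conventions below are made up for the example:

```javascript
// Convention over configuration: every name is derived from the model
// name by a fixed rule instead of being configured per entity.
// These particular conventions are invented for illustration.
function conventionsFor(modelName) {
  var lower = modelName.charAt(0).toLowerCase() + modelName.slice(1);
  return {
    tableName: lower + 's',                   // Event -> events
    controllerName: modelName + 'Controller', // Event -> EventController
    templateFile: lower + '.xml',             // Event -> event.xml
    urlPath: '/' + lower + 's'                // Event -> /events
  };
}
```

With rules like these in place, adding a new entity requires no configuration at all. (Naive pluralization breaks on words like “category,” which is why real frameworks ship an inflector.)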

In essence, these frameworks try to solve the same underlying problem. Yet Rails-like frameworks focus on building web applications within a single technology stack. For simple (web) projects that are easily contained in one logical application and do not spread across multiple software systems, any Rails-like framework is an excellent way of building an application quickly and iteratively. Once an architecture spans multiple technologies and frameworks, or requires custom coding with proprietary products (such as a CMS), MDD proves to be the big brother of Rails-like frameworks.


I have used both the Rails and the MDD approach throughout my career. I have introduced the light-weight MDD approach on a number of recent projects at Razorfish, which led us to build a jumpstart kit that, in its first iteration, lets us quickly bootstrap projects with Interwoven TeamSite and .NET as an application platform. This has not only saved us a lot of time, but also a lot of headaches and long nights of debugging code. We are able to react quickly to changes during the development cycle: changes to the domain model can be made in the short time it takes to open a UML editor and re-run the code generator.

I consequently see MDD as a vital part of agile enterprise development and a complementing technology which picks up where a Rails framework hits its limits.

Microsoft adds new features to .NET

Microsoft has introduced new features in .NET with their Service Pack 1 (SP1) release of .NET Framework 3.5 and Visual Studio 2008.

Most of what is included in the service pack releases is new features and functionality rather than bug fixes and updates to the existing feature set. For example, .NET Framework 3.5 SP1 adds a new concept called the .NET Framework Client Profile, which enables an application to be delivered with just what is needed to install and run the app, rather than the whole framework. This can reduce the size of installation files by 86.5 percent, according to a Microsoft spokesperson. Other major features in .NET Framework 3.5 SP1 include a 20 to 45 percent performance improvement for Windows Presentation Foundation (WPF) applications and updates to Windows Communication Foundation (WCF) that change the way data and services are accessed.

The changes in the Visual Studio 2008 SP1 and .NET Framework 3.5 SP1 are listed here.