General Archive

"The cloud is the new normal" – Highlights from AWS re:Invent 2015


AWS re:Invent took place in Las Vegas during the first week of October. The conference has become one of the premier events in the technology industry, with close to 20,000 attendees and numerous exhibitors.

The conference was kicked off with a keynote from Andy Jassy, senior vice president of Amazon Web Services, who declared “The cloud is the new normal” to his audience. After presenting some impressive growth numbers, he focused his announcements on getting more enterprise customers to migrate to AWS, for example by simplifying the tasks of collecting and analysing data, streaming data, moving large amounts of data to the cloud and migrating existing databases to different database management systems in the cloud.

The second keynote, presented by Amazon Web Services CTO Werner Vogels, was more focused on new development tools, as well as a new offering around the Internet of Things (IoT).

Here are the most significant announcements made at re:Invent in various areas:

Database

During the conference Jassy took aim at Oracle, the “Old Guard”, which is currently the biggest provider of traditional databases. “It’s rare that I meet an enterprise that isn’t looking to flee from their current database provider,” he said.

AWS strengthened their database offering by adding new features and compatibilities to the Relational Database Service (RDS). Along with announcing MariaDB as a fully managed service on RDS, AWS also introduced a set of tools to make database migration to the cloud simpler.


MariaDB is a fork of MySQL and is targeted towards developers who are running LAMP applications but are looking for an alternative to MySQL. The AWS Database Migration Service is intended to help enterprises migrate their databases to low-cost AWS RDS alternatives with minimal downtime. It supports all widely used database platforms, and performs schema and code conversion for migrations between database engines.


Amazon added another service to their enterprise portfolio, QuickSight. It is a business intelligence tool developed to compete with IBM’s Cognos, Microsoft’s Power BI and similar products. QuickSight makes it easy to build visualisations, perform ad-hoc analysis and quickly get business insights from data, and it is designed to be easy for non-technical people to use.

Amazon also launched Amazon Kinesis Firehose, an easy way to load streaming data into AWS. This fully managed service can capture and automatically load streaming data into Amazon S3 and Amazon Redshift, enabling near real-time analytics with existing business intelligence tools.
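As a rough sketch of how an application might feed events into Firehose, the helper below groups events into record batches sized for the PutRecordBatch API; the stream name and event shape in the comments are hypothetical, and the actual send is left to boto3.

```python
import json

# PutRecordBatch accepts at most 500 records per call, so events are
# grouped into appropriately sized batches before sending.
MAX_RECORDS_PER_BATCH = 500

def to_firehose_batches(events, batch_size=MAX_RECORDS_PER_BATCH):
    """Convert JSON-serialisable events into batches of Firehose-style
    records ({"Data": bytes}), ready to hand to PutRecordBatch."""
    records = [{"Data": (json.dumps(e) + "\n").encode("utf-8")} for e in events]
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

# Each batch could then be sent with boto3 (stream name is hypothetical):
#   firehose = boto3.client("firehose")
#   for batch in batches:
#       firehose.put_record_batch(DeliveryStreamName="clickstream", Records=batch)
batches = to_firehose_batches([{"page": "/home", "ms": 42}] * 1200)
```

Newline-delimited JSON is a common choice here because Firehose concatenates records before writing them to S3.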


AWS IoT is one of the major announcements from Amazon. It’s a managed cloud platform which will allow customers to connect and manage billions of devices and enable them to process, analyse and act on the data. It leverages the MQTT protocol, is already integrated with a large number of IoT devices, and allows for rules-based management of these endpoints. With a built-in shadow state mechanism, it makes interacting with occasionally connected devices much simpler.
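To illustrate the shadow idea, here is a minimal sketch of the delta computation at its core; the state fields are invented for illustration, and the real service wraps this in versioned JSON documents exchanged over MQTT topics.

```python
def shadow_delta(desired, reported):
    """Keys whose desired value differs from the reported one, mirroring
    the 'delta' section AWS IoT computes in a thing's shadow document."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# A device that reconnects after being offline fetches the delta and
# applies it to catch up with the desired state (field names invented):
shadow = {
    "desired": {"led": "on", "interval": 30},
    "reported": {"led": "off", "interval": 30},
}
delta = shadow_delta(shadow["desired"], shadow["reported"])  # {"led": "on"}
```

The point of the mechanism is exactly this: an application writes to `desired` at any time, and the device reconciles whenever it happens to be online.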



AWS Mobile Hub is a mobile back-end as a service that allows iOS and Android developers to easily add commonly used features, including user authentication, data storage, backend logic, push notifications, content delivery and analytics.


For most enterprises security is a primary concern when moving to a cloud infrastructure. AWS announced new services, the AWS Web Application Firewall (WAF) and Amazon Inspector, to help administrators boost the security of their infrastructure.

AWS WAF is a web application firewall that helps protect web applications from attacks by allowing web administrators to configure rules that allow, block, or monitor (count) web requests based on conditions they define.
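Conceptually, rule evaluation works like the sketch below; the rule set and request shape are invented for illustration, and real WAF rules are defined through its API rather than as Python callables.

```python
def evaluate_rules(rules, request):
    """First matching allow/block rule wins; 'count' rules only record a
    match and let evaluation continue, as with WAF's monitor mode."""
    counted = []
    for rule in rules:
        if rule["condition"](request):
            if rule["action"] == "count":
                counted.append(rule["name"])
            else:
                return rule["action"], counted
    return "allow", counted  # fall through to the default action

rules = [
    {"name": "bad-ip", "action": "block",
     "condition": lambda r: r["ip"].startswith("203.0.113.")},
    {"name": "long-uri", "action": "count",
     "condition": lambda r: len(r["uri"]) > 100},
]
action, counted = evaluate_rules(rules, {"ip": "203.0.113.7", "uri": "/"})
```

Count mode is useful for trialling a new rule in production: matches are recorded without affecting traffic until the administrator flips the rule to block.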

Amazon Inspector is an automated security assessment service that helps minimize the likelihood of introducing security or compliance issues when deploying applications on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, it produces a detailed report with prioritized steps for remediation.


AWS Lambda was launched at re:Invent last year and has quickly become one of the most popular services on AWS. Lambda allows developers to easily build serverless systems that need no administration and can scale to handle a very large number of requests. This year AWS made some significant enhancements: Lambda function code can now be written in Python, and developers can maintain multiple versions of function code, invoke code on a regular schedule, run functions for up to five minutes, and run functions inside a VPC.
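A Python Lambda function is just a module-level handler, so a minimal sketch of one that might run on a schedule looks like this (the event fields shown are a simplified version of what a scheduled trigger carries):

```python
def handler(event, context):
    """Entry point that AWS invokes. For a scheduled (CloudWatch Events)
    trigger, the event identifies the source and the time the rule fired."""
    source = event.get("source", "unknown")
    return {"ok": True, "triggered_by": source}

# Because a Lambda function is just a plain function plus configuration,
# it is easy to exercise locally before deploying:
result = handler({"source": "aws.events", "time": "2015-10-08T12:00:00Z"}, None)
```

The same handler can then be published as multiple versions and wired to a schedule expression without any code changes.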

Data Transfer

AWS unveiled a new PC-tower-sized storage appliance called “Snowball” which makes it easy for large enterprise customers to transfer petabytes of data into the cloud. Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times and security concerns. Each Snowball device can hold up to 50 terabytes; once filled, it is shipped back to AWS, where the data is loaded into the customer’s account.
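Some back-of-envelope arithmetic shows why shipping an appliance can beat the network at this scale (the link speed below is chosen purely for illustration):

```python
def transfer_days(terabytes, megabits_per_second):
    """Days needed to push a dataset over a dedicated network link."""
    bits = terabytes * 1e12 * 8                    # decimal terabytes to bits
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86400

# Moving one Snowball's worth of data (50 TB) over a fully saturated
# 100 Mbps line would take over a month of continuous transfer:
days = transfer_days(50, 100)
```

At roughly 46 days for a single appliance load, a courier shipment measured in days wins comfortably, before even counting bandwidth charges.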



AWS also announced the Amazon EC2 Container Registry (ECR), allowing you to manage all your container resources on AWS and manage access through IAM.



Maintaining growth on a scale like AWS’s is difficult, but with the announcements made at re:Invent, AWS is well positioned to gain further adoption with enterprise customers.

In addition, AWS talks about re:Invent as an educational event, and they were very successful in achieving this in 2015. The sessions we attended were of a very high quality, and they are also posted online.

Highlights from the 2015 Razorfish Tech Summit


Today marked the seventh iteration of the Razorfish Tech Summit, which brought together more than 200 technology industry executives in New York City to discuss software innovation. The two-day, invitation-only event was collaboratively sponsored and created with Adobe, Amazon Web Services, Appirio, Dynatrace, hybris, IBM, Rackspace, and Yahoo.

The Lab at the Tech Summit included hands-on demonstrations, in partnership with NYC Media Lab, which gave attendees the chance to experience technology projects produced by students from schools including Parsons, New York University and Columbia University. The students were tasked with inspiring the next wave of technology and were given the opportunity to demonstrate their projects alongside installations from Razorfish Global, Rosetta, Dynatrace and Hybris. Popular demonstrations included:

GlassMouse, which invigorates streetscape and mallscape windows with new invisible powers of interactivity; Flatline, a visualization of an individual’s social media presence as an electrocardiograph; and Pure, a speculative wearable piece commenting on the growing issue of pollution and functionality of clothing. This project is a response to the lack of concern for air quality and the impracticality of the modern fashion industry.

Along with the interactive demonstration area, a lineup of panel discussions inspired attendees throughout the day. The event kicked off with an opening talk from Razorfish Global Chief Technology Officer Ray Velez, who encouraged executives to “build experiences that are timely, relevant and smart.” Velez mentioned Marc Andreessen’s quote “software is eating the world” to highlight the ways in which technology is creating an environment that is bringing together physical and digital experiences. Despite the changes in technology, Velez urged: “Consumers are more in charge than ever before. They are expecting services that are built just for them.” Velez emphasized how machine learning, cloud, and big data technology are critical in putting the customer at the center of their business endeavors.

The consumer experience was at the heart of the first presentation of the morning, led by Piers Fawkes, founder and editor-in-chief of PSFK and PSFK Labs. Fawkes gave the audience a preview of his “Future of Retail” report, noting the innovative ways retailers and brands are using technology to connect with consumers. He explained that the new shopping experience revolves around ten pillars, ranging from creating confidence among shoppers to delivering a delightful, unique retail experience.

Next, Dr. Steven Abrams, the director of Watson Ecosystem Technology at IBM, took the stage to explore cognitive computing and how humans and machines are collaborating in surprising ways. “It is not man versus machine now,” Dr. Abrams said. “It is man working together with machine.”

He cited various examples of how cognitive computing might work in different industries, whether it is a chef using a collaborative partner for recipe ideas or a physician working to better diagnose patients by analyzing the medical records of those who experienced similar symptoms. A video shown during the presentation also showcased the work of Elemental Path in New York City, which is using IBM’s Watson technology to develop connected toys to aid children in learning.

Following Dr. Abrams’ presentation, Razorfish leaders Christopher Follett (Executive Creative Director) and Eric Campdoras (Creative Technology Director) encouraged the audience to view the retail environment in a new way. Their demonstration of “Talk & Shop,” which involves a mix of iterative prototyping, artificial intelligence, synchronization and voice capabilities, showcased how mobile can enhance a consumer’s shopping experience.

And as today’s consumer is very much influenced by mobile, David Iudica, Yahoo’s director of strategic insights and research, revealed findings from a global study of 6,000 smartphone users conducted with Flurry Analytics.

Highlights of the study include:

  • People are adopting smartphones quickly due to the multiple products they have replaced

  • More efficiency and a better experience will lead to increased smartphone usage

  • Phablets are the fastest growing device; as screen sizes grow, engagement will increase

  • In the U.S., people spend three hours and 40 minutes a day on mobile devices

  • Premium native mobile advertisements earn three times more attention from consumers than static banner ads

Following this deep dive into smartphone usage, Andrew McAfee, principal research scientist at the MIT Sloan School of Management, shared how machines and humans can work together, stressing that people still have the upper hand. “Our brains and our minds clearly do remain valuable even in a world of advanced technology,” McAfee said. We all felt reassured when he pointed us to a recent report, “The Growing Importance of Social Skills in the Labor Market.” He also encouraged the audience to “be more geeky” and to embrace data to make better decisions. The team appreciated hearing the geeks (data-driven decisions) vs. HiPPOs (highest paid person’s opinion) debate.

Scott Amyx, founder and CEO of Amyx+McKinsey, then discussed how wearables and the Internet of Things can provide new capabilities that impact the brand engagement cycle. He shared examples from retail and omni-channel marketing that showcase how to engage consumers in new ways. He also explained how wearables and the Internet of Things can enable better experiences by measuring emotion. One statistic he shared that surprised the crowd was that 55% of what we communicate is done via body language.

After Amyx’s talk, David Nuescheler, the VP of Enterprise Technology at Adobe Systems Incorporated, delivered a presentation entitled “Technology Paradigm Shifts.” He explained why Adobe’s priorities in open, cloud and mobile experiences are critical to the company’s future success. He wrapped up with the new truism, “the only constant now is the acceleration of change.”

Next, NYC Media Lab Executive Director Justin Hendrix led a panel with the Razorfish Emerging Experiences team, including Steve Dawson, Luke Hamilton, Charles Fletcher and Kat McCluskey. The group shared their views on the future of augmented and virtual reality. “These technologies are beginning to create a lot of buzz,” Hendrix said, noting that $150 billion will be invested in the space by 2020. Sensor technology and 3-D design were cited as innovations in the category.


The discussion then transitioned to the NEXUS Customer Intelligence Marketing Operating System (MOS). Panelists Samih Fadli, Chief Intelligence Officer, Razorfish Global; Dmitri Tchikatilov, Global Business Development Executive for Digital Advertising at Amazon Web Services; and Tom Kotlarek, Executive Vice President, Technology, Razorfish Global, shared the ways data can be leveraged to drive targeted customer experiences and boost engagement. With incredibly powerful big data technology enabled through cloud computing, NEXUS offers an answer to the oncoming deluge of data from an ever-growing number of sources. Its vision is to use data to listen to customers, enabling breakthrough experiences.

Visa’s Senior Vice President, Global Head of Digital & Marketing Transformation Shiv Singh, our former Razorfish colleague, delivered a dynamic presentation, “Five Ideas for Marketing Transformation.” Singh said this kind of marketing revolves around open source brand building, transmedia storytelling, ubiquitous engagement, being anchored in owned media experiences that mirror the model of a publisher or magazine, and adopting a marketing checklist that takes full advantage of digital. “Software is becoming culture,” Singh said. “Technology is culture—it is life.”

Hilary Mason, the founder and CEO of Fast Forward Labs, which advises brands on data strategy, closed the Tech Summit with a look at data science. Mason presented on “Innovation Through Data,” exploring how companies like Reddit, Foursquare and Google Maps are utilizing data effectively. She mentioned that companies of this nature are driven by data due to access to sophisticated computation, the ability to collect and store it, and being adept at analyzing it in an efficient manner. Mason also touched on machine learning. “We are at the beginning of machine learning applications,” she said. “We will see machines that interface with us.” Check out her recent book, Data Driven: Creating a Data Culture, on how to use data to help drive your business.

Hands down, it was our best Tech Summit yet. Watch this space—in the coming days we will publish another post that contains links to many of the presentations.

Razorfish Global Technology Summit 2015

The seventh iteration of the Razorfish Global Tech Summit explores an element that’s crucial to the success of any business today: software innovation. Venture capitalist Marc Andreessen once stated in The Wall Street Journal that “software is eating the world,” and it certainly holds true today. In today’s tech-driven world, businesses thrive only by embracing change and being willing to rethink their business models when the time calls for it. Software is at the root of this level of innovation.

More than 200 executives across the tech industry will gather in New York City for a premier conference that explores the theme, “Business Transformation Through Software Innovation.” From leveraging data and analytics to engage consumers in meaningful ways to developing next-generation Internet of Things experiences around their interests, software is at the heart of every brand that’s driven by innovation.

Leading technologists will gather at this special conference to share knowledge about the software innovations that have changed the way they do business. Folks ranging from Andrew McAfee to Hilary Mason will discuss technologies that have disrupted and future technologies that will disrupt.  Attendees will learn how software and digital experiences can enable a company to operate more efficiently, grow revenue and provide greater value for both consumers and employees at the right time. In a world where the consumer is in charge, software innovation is the only way to reach your customer. From engaging panels to hands-on demos and experiences, the Razorfish Global Tech Summit is a conference tech industry professionals—and thought leaders—can’t miss.

We will wrap up the event with a series of hands-on workshops for our clients and teams to help drive the first steps along the strategic path put forward by the visionaries during the day 1 presentations.

Currently, this event is invite-only. If you have any questions related to the Tech Summit, or would like an invite, please contact

Austin Razorfish Techies Help To Empower Girls in STEM


Girlstart, an organization whose mission is to empower girls in Math, Science and Engineering, hosted their 10th annual STEM Conference in Austin on April 11, 2015. Over 600 girls from 3rd-8th grade registered for the all-day event where 26 hands-on workshops were led by over 100 STEM professionals from companies including Razorfish, Cisco, Emerson, Dell, 3M, Thermo Fisher, Xerox, Applied Materials, Texas Gas Service, Electronic Arts, and more. There were also 100+ volunteers from companies including Dell, Samsung, Intel, IBM, VISA and Farmers Insurance to help make the day a success.

Razorfish led two workshops called “Girl Code,” where we introduced the girls to the wonderful world of coding, design, UX and copywriting through the process of building a website. We started by putting our content into HTML and had the girls give us ideas to make it ‘prettier’ before introducing them to CSS. The class compared a design comp with the work in progress to find differences, using a word bank of CSS terms. The girls then helped us during a live coding portion to customize the site as they wanted, after we addressed the differences they had found between the design comp and the live site. Overall, the girls left feeling inspired and excited about the many potential career paths in STEM.

Razorfish participants: Britney Jo Ludkowski, Camille Church, Hillary Oneslager, Jaime Sporl, Anna Lepine, and Jessica Grantham.

Girlstart created a highlight video of the day:

Pictures from Razorfish’s workshop can be found here:

Essilor of America – a Razorfish, Adobe and AWS Case Study


In 2014, after a comprehensive RFP and review process, Essilor of America selected Razorfish to be a Digital Agency Partner for its consumer brands, including but not limited to Varilux®, Crizal® and Xperio UV™ lenses. Less than a year later, we have successfully launched all three redesigned brand sites on Essilor of America’s new digital marketing platform.

The Challenge

Essilor of America was looking for a new creative design and experience, as well as a digital marketing platform that could support not only the Varilux brand experience but also Crizal, Xperio UV and potentially other brand experiences in the future. The goal was for consumers to have a common experience across Essilor digital properties, supported by an extensible digital marketing platform. Razorfish has developed a number of multi-brand platforms for other clients, including Fluent, a productized digital marketing platform built around Adobe technologies.

Essilor of America’s digital properties were all maintained by corporate IT and hosted outside the US. As such, new projects, redesigns and updates required significant coordination with the international team. The existing corporate content management system was also not equipped to meet the needs of the business.


Why Adobe Experience Manager (AEM) and Amazon Web Services (AWS)

Razorfish and Essilor of America evaluated a set of content management platforms against current and future business requirements, including but not limited to: cost; breadth of features (targeting and personalization, digital asset management, campaign management, multi-site and multi-language support); out-of-the-box integrations with other marketing tools; open source foundation and standards-based support; reporting capabilities; and industry ranking. They also evaluated a set of cloud hosting vendors against a set of business and IT requirements, including but not limited to the ability to scale, cost, time to market, deployment agility, technology breadth, security, disaster recovery support and innovation. In the end, Essilor of America selected Adobe Experience Manager hosted on Amazon Web Services as its digital marketing platform and infrastructure.

Razorfish helped Essilor of America design and establish a new infrastructure to support the new digital marketing platform. With the expectation that additional brands will share the platform in the future, AWS allowed us to perform capacity planning for current needs, with the ability to scale easily for seasonal peaks and future growth. For high availability, we built a multi-server platform that spans availability zones within the region. While disaster recovery is not currently a requirement, the Razorfish Fluent AWS CloudFormation templates, AEM scripts, and backup data and snapshots saved in Amazon Simple Storage Service (Amazon S3) will allow us to stand up a complete AEM stack within minutes, if needed.

As the solution was intended to support both consumers and eyecare professionals, Razorfish, AWS and Essilor of America worked closely together to address current and future security requirements. We utilized Amazon Virtual Private Cloud (VPC) to isolate the environments from each other and to add multiple layers of security to supplement security groups and network access control lists. A private subnet was created for the AEM author and publish server instances, with only the web server/dispatcher instances in the public-facing subnet. From an application perspective, we used AEM’s user and group management features to create groups and permissions for each brand site, providing controlled and appropriate access to brand managers and their agencies.
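The network tiering described above can be sketched as data; this is an illustrative model rather than actual CloudFormation, the group names are hypothetical, and 4503 is simply AEM's conventional publish port.

```python
# Only the dispatcher tier is exposed publicly; the AEM tier accepts
# traffic solely from the dispatcher's security group.
web_sg = {
    "GroupName": "web-dispatcher",
    "Ingress": [
        {"Protocol": "tcp", "Port": 80, "Source": "0.0.0.0/0"},
        {"Protocol": "tcp", "Port": 443, "Source": "0.0.0.0/0"},
    ],
}
aem_sg = {
    "GroupName": "aem-publish",
    "Ingress": [{"Protocol": "tcp", "Port": 4503, "Source": "web-dispatcher"}],
}

def reachable(sg, port, source):
    """True if the group admits TCP traffic on `port` from `source`."""
    return any(rule["Port"] == port and rule["Source"] in (source, "0.0.0.0/0")
               for rule in sg["Ingress"])
```

Referencing one security group as the source of another, rather than an IP range, is what lets the private tier stay closed even as instances are replaced.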


On the experience front, Razorfish performed a full redesign of the Varilux, Crizal and Xperio UV brand websites, delivering bespoke, fully responsive and mobile-friendly experiences that stay true to each brand while utilizing shared and reusable templates and components built on Adobe Experience Manager (AEM). AEM’s Multi Site Manager (MSM) module and blueprint capability allowed us to build a site structure that supports multiple brands in multiple languages. AEM’s out-of-the-box Digital Asset Manager (DAM) was ideal for creating a library of brand assets that can be shared as needed. Overall, AEM empowered the digital marketing teams and their agencies to own content creation, modification and activation in a timely and efficient manner.

Timing and Results

By partnering with Razorfish for both creative design and technical execution, and leveraging our experience in building AEM solutions on AWS, we were able to design, build and deploy the newly redesigned Varilux USA site within four months. Even with a two-month lull after the Varilux USA release, Crizal USA successfully launched in January 2015 and Xperio UV followed in March 2015, completing the three main consumer brands in less than a year. Both Crizal and Xperio UV were redesigned and launched on the same digital marketing platform, building on top of and benefitting from the infrastructure, site architecture, templates, components and services already built for Varilux. Currently, additional experiences and microsites leveraging the same platform are in progress.


Varilux USA – Crizal USA – Xperio UV USA

About Essilor

Essilor is the leading manufacturer of optical lenses in the United States and is the market leader in progressive, high-index, photochromic and anti-reflective coated lenses. A pioneer in the development and production of ophthalmic lenses, Essilor employs more than 10,000 people throughout North America. Essilor manufactures optical lenses under the VARILUX®, CRIZAL®, TRANSITIONS®, XPERIO UV™, DEFINITY®, THIN&LITE® and other Essilor brand names. Essilor Laboratories of America, Inc. is the largest, and most trusted, optical lab network in the U.S. and offers a wide choice of services and lens brands, including Essilor premium lenses, to eyecare professionals across the nation. Essilor of America, Inc. (Essilor) is a subsidiary of Paris-based Essilor International, a publicly held company traded on the Euronext Paris stock exchange (Reuters: ESSI.PA).

The Journey From Recruiter To Coder

When Anthony asked me to write this blog post I didn’t know where to start. My name is Greg Pfaff and I work for Razorfish as a Senior Recruiter. For eight years I have recruited the best software engineers I could find for my clients; the last four have been for Razorfish, and it’s been a wonderful experience. During my time here I have learned a lot and reached new personal and professional heights. Through lots of learning, Razorfish gave me the fortunate opportunity to switch from the recruiting field to become a full-time Front-End Developer. I didn’t know how to start this article, so I googled it; what I found were mostly articles about software engineers turning recruiter, not the other way around. So I’ll do my best and give it a shot from my perspective.

Learning a new skill set and starting from scratch can be daunting, but it can also be very rewarding. Talking to software engineers and web developers all day became a lens into a gigantic playground that I needed to be a part of. It doesn’t happen overnight, but here are the steps I took which helped me gain entry into the field. I started off with some nightly boot camp classes; this way I could keep my day job while familiarizing myself with basic technologies. From that point, I leveraged any online resource I could find, setting up specific times and dates to make sure I was investing in my technology skill set. Any question you might have, I am willing to bet there is someone who has solved it before and can provide an answer. To complement the other steps, I tried to interact with anyone who was more senior than I was, whether it was asking development questions of potential recruits or pairing with programmers at Razorfish. Talk to as many people as you can! They are an invaluable resource for learning about technology, and the beauty of the tech community in general is that they’re always open to lending an ear and helping you solve a problem. One thing to keep in mind is that they were once in your shoes and know how hard it can be at times. The key is to keep on keeping on!

The journey will be long and arduous, but I am extremely excited for what is ahead of me. I am leaving a field where I have spent the last eight years and joining one where I am very green. The challenge ahead of me is exhilarating, as I will be continuing to learn, and I can’t wait to see the possibilities!

by Greg Pfaff (recruiter turned coder)

The Success of Embracing AngularJS in a Large-Scale Enterprise Solution

by Jeff Chew (@therealjeffchew) - Technology @Razorfish

On March 17th, I gave a talk at the Google office in Chelsea describing how to build a scalable AngularJS project. The magnitude of this stayed with me for weeks leading up to the event, seeing that AngularJS is a JavaScript framework created by Google themselves. And looking at the roster, there were a couple of hundred people signed up to come, with an additional 200+ on the waiting list.

Rewind to over a year ago: I joined the Ford account at Razorfish understanding that I’d be taking on and leading the presentation layer charge of a massive revamp of the Ford and Lincoln owner portals.

The predecessors were on a platform called Fatwire, which the client had decided to leave by the wayside in a re-platform to Adobe Experience Manager (AEM), an enterprise-level content management system (CMS) by Adobe. As part of this revamp, we took the opportunity to reevaluate performance and how we could make things better. Ultimately, we opted to separate the logic into various layers instead of letting the server side do all of the heavy lifting.

In the new stack, we allow AEM to do only what it does best, which is to deliver content. We then extracted all of the SOAP operations from the server side and exposed them as a RESTful API layer that communicates with the presentation layer. What this means is that all of the business logic now rests in the hands of the front-end.

We selected AngularJS as the framework of choice. In addition, we embraced AngularJS as a whole, using it as a pure solution rather than a mix of other frameworks or libraries (e.g. jQuery). This allowed us to fully utilize its unit testing capabilities, as well as the ability to fully modularize the product we were building. To elaborate on the idea of modularization, AngularJS gives us the power to separate functionality into smaller pieces that can then be put together wherever they are needed. For example, if we are creating a hook into the service layer, we only need to write it once and then place it in any section that needs to use that hook. This particularly came in handy, as the number of RESTful service integration points reached close to 90 by the time the site launched.

Extending the idea of modularization further, I was tasked with also architecting two projects that came in during the year: Ford’s corporate site and another Ford property called Quicklane.


Even though these sites had different business needs, I went with the same architectural structure as the Ford and Lincoln owner portals, then kicked off the respective development teams. We noticed a lot of commonality across the various work streams. We didn’t think it made sense to reinvent things like an accordion, a carousel, or even how we handle responsive imagery, so we modularized all of these as standalone components, which we could then pick up and move from one work stream to another with minimal integration time.

The Ford corporate and Quicklane sites launched successfully in Q4 2014, while the owner portals launched without a hitch at the end of Q1 2015.

We also saw some considerable performance gains during our performance and load tests. Some highlights to note compared to the old Ford/Lincoln owner portals:

  • Login & Authenticated Homepage: 50% faster @ 9x load
  • Registration: 130% faster @ 3x load
  • Upload Software Installation Log: 328% faster @ 100x load
  • Check Sync Software Status: 536% faster @ 6x load

As part of exploring how to architect a scalable AngularJS solution, I quickly realized that the approach we used was not documented or spoken of anywhere in the tech community. I took this as an opportunity to share how we were able to successfully leverage AngularJS in an enterprise solution across multiple work streams.

I prepared my presentation hot on the heels of the most recent launch, which didn’t leave much time. On top of that, I was scheduled to do a dry run of the presentation with Google a day after the launch date. Fortunately, our launch went so smoothly that it gave me time to start thinking about how I wanted to tell our story to Google and the tech community at large.

The final story evolved into two separate tales. One is how to guide a new AngularJS developer from a basic project to something that is scalable and can be used in a real-life project; this was told with multiple sample projects, building from a baseline to a usable product. The other is the problem we were faced with, and how we applied this guide to the various Ford projects that are live today.

All in all, I’m proud of the work that the Razorfish Ford team was able to accomplish, and even prouder of the fact that we could tell our story outside the walls of our office. The presentation at Google was well received, and was delivered to a packed room with people standing in the back.

The presentation was recorded, and will be available on Google’s AngularJS Youtube channel in the near future.

Yahoo’s first Mobile Developer Conference

Impressions from Yahoo’s first Mobile Developer Conference

By Fred Welterlin and Grant Damron

Yahoo held their first Mobile Developer Conference in San Francisco last week. Overall, we are most impressed and excited that Yahoo appears to be getting back to focusing on innovation (not content curation), as shown by some of the product feature launches. The day began with Marissa Mayer and Simon Khalaf clearly pinning Yahoo’s future as a “mobile first” oriented company. Note that in the context of Yahoo’s strategy, “mobile first” really means “apps first.” Khalaf in particular illustrated the current app usage revolution with analytics that suggest huge exponential growth of app usage well into the future. Interestingly, while a handful of social apps (Facebook, etc.) represent where consumers spend the majority of their time, Yahoo asserts that the largest growth areas (the long tail) will be elsewhere: specifically, shopping applications. Yahoo wants to position itself as a leader by providing the technologies that allow start-ups and existing businesses to grow mobile app based commerce and perhaps even leverage some of Yahoo’s content offerings.

Central to the “apps first” strategy is Flurry (acquired by Yahoo this year), the premier mobile analytics tool. The rapid integration of the firm into Yahoo’s larger ecosystem made it possible for a collection of new (and “free”) products, named the Yahoo Mobile App Development Suite (featuring six tools that mostly support analytics and monetization), to be developed and announced at the conference. Of the new offerings, two analytics-oriented tools caught our eyes:

  • “Explore” allows users to run custom queries on data in real time, generating high quality graphs within seconds. Think along the lines of Tableau, but free. A quick overview of the service’s architecture presented in the afternoon implied some fascinating innovations and, not surprisingly, that it is running Hadoop under the hood.

  • “Pulse” has a little way to go, but has potential. It allows developers to send analytics data from the app to other services (reducing overhead, most noticeable in terms of network and battery usage). Only one service is currently integrated, limiting its immediate utility. It will be interesting to see what other analytics services get on board down the road.

As for monetization, Yahoo has been a major player in targeted advertising for some time now, so it’s no surprise that their tools are designed to funnel data through their ad platform as much as possible (this is what makes it possible for Flurry to be free!). While not exactly new, the most significant ad-specific product unveiled was the Native Ads Service. In an attempt to move beyond the ecosystem of boring ad banners, the service provides applications with all the assets for a quality ad (copy, image, etc) while allowing the apps themselves to handle placement and presentation. This encourages more consistent integration with the host app’s content. Yahoo presented numbers that support an increase in conversion rates, at least within Yahoo’s own apps.

Our Take Away

Clearly, Yahoo is looking to empower native app start-ups with “free” tools that provide them with measurement so that product refinement cycles can occur quickly, based on direct feedback. Coupled with seamless integration of advertising (for example, we noted beautifully integrated BrightRoll video ads within some app demos) and the potential to leverage Yahoo’s enormous reach with users and content (news, sports, etc.), business developers for mobile applications have a nice set of tools to help them find an edge in the increasingly crowded yet still “wide open” mobile ecosystem. Show me the money!

Amazon Web Services – Bigger and bolder, but will it be the new normal?

A recap of this year’s edition of the AWS Re:Invent Conference

Contributed by Anoop Balakuntalam

Over the last few years, Amazon has gone from being a brand recognized as a leading commerce player to a brand that is also on the cusp of becoming a technology giant. With cloud operations in over 11 regions across the world, over a million active customers including 900 government agencies worldwide and over 500 significant feature & service launches in just this year, Amazon Web Services (AWS) has shown that it has the breadth and depth to be a leader. In the 2014 Cloud Infrastructure as a Service Magic Quadrant, Gartner rated AWS as having the furthest completeness of vision and highest ability to execute, and also observed that Amazon Web Services has 5 times the compute capacity in use as the aggregate total of the other 14 providers in the quadrant!


Number of significant new features and services from AWS

At the recently concluded third edition of the AWS Re:Invent conference in Las Vegas, AWS further entrenched its position, demonstrated its lead and came off as much bolder in its messaging & offerings. The presence of over 13,500 attendees from 53 countries gave testimony to the reach and size of Amazon Web Services. It would be safe to say that in enterprise technology circles, the word Amazon now first evokes images of a computing & technology provider and only then that of a commerce player. In some ways, AWS also feels like the new Apple, with crowds eagerly waiting to hear about the new service launches (all held carefully in secret until the keynote) and then the (virtually) long queues to get access to the feature previews!


Razorfish on the SI slide – Day 1 Keynote by Andy Jassy, SVP, AWS

One of the most pervasive ideas that AWS presented at the conference was the idea of “cloud as the new normal”: it is no longer a question of “if” or even “when” you should move to the cloud, but that if you are not already on the cloud you’re falling behind. In fact, AWS took its messaging one step further to talk about how several large enterprises have made the decision to go “all-in” with Amazon. Companies like Major League Baseball were invited to talk about their decision to go with AWS, and several of them made a reference to how it was a no-brainer to choose Amazon. At the end of it all, one might be forgiven for thinking that the message was that “AWS is the new normal”!

Although this year’s conference was largely about taking strides to change the course of technology and to make its presence felt in devops, AWS also continued wooing its enterprise customers. Philips Healthcare spoke about the massive petabyte scale of real-time streaming & compute they’re using to change the world of healthcare. AWS also talked about how security & compliance are now the reasons and not the blockers for cloud, which was followed by a presentation by Intuit to go with this message.

Several new service announcements were targeted at the enterprise buyer:

  • The AWS Key Management Service for encryption key management & compliance, bringing easier management of keys with greater visibility & control.

  • The AWS Config service was presented as an alternative to ageing ITIL toolsets for resource visibility, dependency tracking & configuration management.

  • The AWS Service Catalog to enable enterprises to provide discovery & provisioning of approved services on the cloud to its users via a custom catalog.

Some of the boldest moves from Amazon were its attempts to change some of the fundamental building blocks of the software world. Two in particular stand out:

  • AWS Aurora – a new MySQL compatible database engine with enterprise grade performance, cloud grade scalability & fault tolerance and open-source grade pricing. AWS claims 5x the performance of MySQL at one-tenth the price of comparable commercial databases. This is the first time that Amazon has offered a core software service such as a cloud-grown relational database engine to the enterprise as a possible alternative to large well established commercial offerings.

  • AWS Lambda – a way to run highly available, highly parallel, event-driven code functions in the cloud without the need to manage any kind of infrastructure! This is clearly an attempt to redefine how software is built with new patterns that are entirely cloud-first.
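To make the Lambda model concrete, here is a minimal event-driven handler sketched in Python. The handler name and the event shape (records carrying a `key` field) are illustrative assumptions, not a specific AWS event format:

```python
# A minimal sketch of an event-driven function in the Lambda style:
# the platform invokes a named handler with an event payload and a
# context object, and there is no server for the developer to manage.
def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        # Each record describes one triggering object or message;
        # here we just normalize its (hypothetical) "key" field.
        processed.append(record.get("key", "").upper())
    return {"count": len(processed), "items": processed}
```

In a real deployment, AWS wires such a handler to an event source (an S3 upload, a Kinesis stream, a queue message) and scales the invocations automatically.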

These launches are massive leaps from the heretofore wrapping of existing software & paradigms in scalable automated cloud services, to defining completely new kinds of services and paradigms. All very bold moves and compelling, but concerns of lock-in lurk under the surface.

AWS also strengthened its presence in the world of devops with its share of new “agility is the holy grail” focused services:

  • AWS CodeDeploy – AWS made available to the rest of the world an avatar of its in-house code deployment service, which currently enables an average of 95 deployments per minute! AWS also announced the availability of two other Application Lifecycle Management tools in early 2015 – CodeCommit and CodePipeline. While CodeCommit seems to be in direct competition with GitHub, allowing developers to host code closer to their AWS environments, CodePipeline seems to be the AWS-native way to do Continuous Integration and Continuous Deployment.

  • AWS EC2 Container Service – With a large section of devops professionals expecting some kind of a container service supporting Docker, AWS would have done itself a disservice by not announcing one. The demo showed how containers are automatically scheduled across underlying heterogeneous infrastructure components. The topic of VMs vs. containers will continue to be the rage this coming year.

Finally, AWS also made several overtures to propagate its view of how to think about, operate and run technology organizations. Several references were made to the three-pronged AWS culture – customer focus, innovation and long-term focus. Some enterprise patterns for technology adoption and governance were also discussed.

We were fortunate enough to hear Jeff Bezos speak, and the insights he shared were pretty amazing. During the first couple of years of AWS, many people questioned the strategy; right now, however, it seems few are wondering. His advice that leaders need to counterbalance the ‘institutional no’ is something that AWS buyers have certainly lived. It was also interesting to hear from Steve Schmidt (CISO, AWS), who made several good points about managing security and urged his audience to make it easier for people to do the secure thing than to do something insecurely.

No developer, no project and no enterprise can afford to ignore the new forces and paradigms that Amazon brought to the table. The ability to run code without having to manage any infrastructure, and having Intel develop a processor exclusively for Amazon, are clear indicators of an emerging giant. With the number of “all-in” migration clients presented at the conference and the repeated reinforcement of the “new normal” message, AWS is still the biggest force to reckon with despite the intensifying competition. But as Gartner said, the race has just begun!


Our Weekend At The Salesforce $1 Million Hackathon

The Salesforce $1 Million Hackathon is an annual event organized by Salesforce in Silicon Valley. This year, it was held from October 10–12, 2014 at City View at Metreon, San Francisco, California (USA).

Contributed by Brajeshwar Oinam

It was an electrifying feeling to be surrounded by developers and designers from all around the world. I have always wanted to be a part of such a big event. What excited me the most was the opportunity to meet like-minded people with diverse backgrounds and a variety of experience in different domains. Knowing what I could take back from this place was priceless.

The event consisted of four rounds. One of the important guidelines was to create an app using or with Heroku. We could form a team of up to 6 members with anyone interested. We tweeted looking for team members; the tweets were displayed on a giant display visible to all. I teamed up with a Brazilian and three non-resident Indians – four team members whom I had not met before.

The first few minutes of interaction mostly involved getting to know each other and our domains of expertise. It is very important to know each other’s strengths and weaknesses in order to assign roles and work efficiently in a newly formed team.

The amalgamation of ideas and exchange of knowledge was enlightening. There were a host of things I taught them and a lot of things I learnt from them. After brainstorming, we decided to come up with an app that could boost efficiency and performance while reducing lost time in a business environment. Careful evaluation led us to a consensus that we needed to combine two of time management’s well-established principles and integrate them into our app.

The first was the Eisenhower matrix, which helps sort tasks on the basis of importance and urgency; the second was the Pomodoro technique, which helps focus all our attention on a specific task, with time restrictions to evaluate performance. We had to ensure this setup had an exceptional user interface and worked smoothly. We named our app ‘Simpledone’. The app we built could categorize tasks and send notifications on a priority basis, with a timer to evaluate performance. Users could assign tasks and set their urgency and importance. The target user for the app was anyone with a busy lifestyle who wants focused execution with minimum effort.
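The Eisenhower categorization described above can be sketched in a few lines. This is a hypothetical model, not Simpledone’s actual Rails code; the `important`/`urgent` field names and quadrant labels are invented for illustration:

```python
def eisenhower_quadrant(important, urgent):
    """Map a task's importance/urgency flags to an Eisenhower quadrant."""
    if important and urgent:
        return "do first"
    if important:
        return "schedule"
    if urgent:
        return "delegate"
    return "eliminate"

def prioritize(tasks):
    """Order tasks so important-and-urgent work comes first."""
    rank = {"do first": 0, "schedule": 1, "delegate": 2, "eliminate": 3}
    return sorted(
        tasks,
        key=lambda t: rank[eisenhower_quadrant(t["important"], t["urgent"])],
    )
```

A Pomodoro timer then simply runs against whichever task this ordering puts at the top of the list.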

We decided to go with a mobile-first strategy. The app had an API-oriented architecture and was responsive. We used Ruby on Rails (RoR), CSS (powered by Sass), JavaScript, HTML and a PostgreSQL database to develop the app, which would later be made open source. We made use of Heroku, which saved us time and increased efficiency while developing the app. The app uses an algorithm-generated priority list and visual time boxing.

We received a very positive response from the judges on the app’s user interface and user experience, with a rating of 63%. The majority of them were interested in knowing how the app could be integrated into existing project management tools like JIRA, Basecamp, etc.

The opportunity to work with random people having unknown capabilities and coming from distinct cultures was one of the best learning experiences. We made some great connections and have been regularly updating each other with new ideas and suggestions.

I also noticed a lot of people focusing on application development and design for wearable devices and an increased focus on Internet of Things (IoT). I feel these are going to be the next big things. I believe everyone interested in the field of development and design should participate in similar events for the exposure it provides and the kind of knowledge transfer that it results in – working in an environment that takes us out of our comfort zone really makes us think out of the box.

You can find the repository of ‘Simpledone’ on GitHub here. We also made a video of our pitch for the app. Take a look at it here.

You could try out the app at:

More Info about the app available here.


The Razorfish Digital Platform Maturity Model

written by: Martin Jacobs (GVP, Technology)

In our work, we often build platforms for our clients to deliver on the promise of data-driven marketing & commerce. In the current landscape, it is critical to deliver the right message, at the right time, to the right person, across all channels. Doing so requires a multi-tiered, data-driven architecture that incorporates all data sources, integrates with the different technology tools and uses them across all channels.


Core elements of Data Driven Marketing 1

The solution needs to include all owned data sources, partner sources, as well as 3rd-party data sources that are relevant and improve the accuracy of visitor identification. Through appropriate data management techniques, followed by visitor segmentation, experiences and messages can be targeted and delivered to the right audience.

As we are often asked to assess our clients’ ability to effectively deliver on the promise of data-driven marketing & commerce, we look at how well they are able to integrate the technology solutions and data sources listed above.

We have seen many different results, and to better qualify this, we developed a platform maturity model, shown in the diagram. The six tiers range from very basic levels of maturity to advanced, automated multichannel (i.e. across all digital touch points) personalization & targeting.

This maturity model has been very helpful not only in identifying and assessing where our clients are from a capability perspective, it has also provided a good framework to help them reach the next levels in a staged, multi-phased approach.

However, we have seen that many clients have challenges reaching the next levels on their own in the desired time frame. This was also clearly reflected in the different studies we released in the past few years, including “The State of Always-On Marketing Study” and the 2012 Razorfish / Adobe Targeting Readiness Study. Also, see our recent announcement on leveraging Adobe’s Audience Manager platform to help drive greater connectivity to services and data, eventually resulting in more client efficiencies.

We therefore created our own offering, called Fluent. Fluent is a turnkey SaaS solution, built upon Adobe software, using Amazon Web Services extensively, and incorporating Razorfish’s expertise in creating and managing digital platforms.

We have been running client solutions on this platform for the past year. In the next few blog posts I will delve into more detail around this platform and highlight some of the key design decisions we made with Fluent.


Fluent – Razorfish’s SaaS Digital Marketing Platform

As mentioned in the previous blog post, Fluent is Publicis Groupe’s turnkey SaaS marketing platform. But what is a marketing platform?

In our vision, the marketing platform is the single integrated technology solution that provides the foundation to deliver a broad set of digital experiences, ranging from websites to kiosk experiences, from microsites to global multi-language sites. With this integrated platform, we are able to deliver these experiences in a connected manner, providing a consistent message to a visitor across all touch points.

The foundational building blocks of a platform are:

  • A set of integrated services providing core data and functional services
  • An infrastructure and software foundation
  • A set of organization-specific application, template or component frameworks to deliver experiences
  • A group of processes defined to support the creation, management and evolution of the platform and platform components


Digital Platform Building Blocks 1

As highlighted in the diagram, we selected Amazon Web Services and Adobe as the key technology vendors for infrastructure & software.

Adobe’s key software offering around digital marketing platforms is Adobe Experience Manager, an industry-leading solution for content & digital asset management and digital experience delivery. It has leading usability and a rich set of capabilities to deliver complex global solutions. For those reasons alone, it is already a very strong option to provide experience management functionality, but there are others in the marketplace as well.

The open nature of the Adobe product was another key driver for us to select AEM and associated products such as Target and Analytics. Built on open-source technology such as Apache Jackrabbit and industry standards such as Apache Sling and OSGi, it is easier for us to extend and to integrate with many other solutions.

From an infrastructure perspective, there are many strong solutions in the market, and we work frequently with many of these. We selected Amazon Web Services due to its market leadership around innovation and offerings. It is global in nature and has many tools and options to deliver scalable & global solutions cost efficiently.

Its rich toolset helps us customize disaster recovery, reliability and availability for each of our clients to their needs very easily. It also helps us deliver marketing experiences in a very natural way, scaling up and out as traffic demands require it.

However, similar to Adobe, a key criterion for us has been the open and extensible nature of AWS. The different offerings are clearly separated with accessible APIs, allowing us to combine capabilities as simple building blocks into new configurations.


Amazon Web Services Capabilities 1

From a services perspective, both vendors are also leading in their space. Adobe has solutions such as Target, Analytics, Media Optimizer and Audience Manager that are best in class for data-driven marketing, and we use these technologies frequently in our Fluent solutions.

However, we also like Amazon’s capabilities in this area. Not only does AWS provide unique differentiating tools such as Kinesis for real-time analytics, it also provides many of the foundational offerings around workflow, messaging and queuing that allow us to integrate Fluent with many 3rd-party solutions in a fast and simple manner.

Thanks for joining us at the 2014 Razorfish Tech Summit

Over 200 attendees gathered at the Altman Building in NYC for two days of insightful presentations, hands-on workshops and networking.

We hope you found the Tech Summit inspiring and thought provoking. To help us improve on future summits, please take our short survey here or email us at

See you next year! The Razorfish team

If you would like to watch any of the presentations, please click below:

  • Ray Velez, Global Chief Technology Officer, Razorfish video slides

  • Keynote: Piers Fawkes, Founder and Editor-in-Chief, PSFK video slides

  • David Stover, Global Solution Management Lead – Mobile, Store, Pricing - hybris video slides

  • Martin Jacobs, GVP, Technology, Razorfish video slides

  • Rafi Jacoby, Director, Social Technologies, Razorfish video slides

  • John Cunningham, Chief Technology Officer, EMEA video slides

  • Shane Dewing, Senior Director, Product Management, Qualcomm Connected Experiences, Inc. slides

  • Peter Semmelhack, Founder and CEO, Bugs Labs video slides

  • Keynote: Roy Fielding, Senior Principal Scientist, Adobe slides

  • Chris Bowler, GVP, Social Media, Razorfish video slides

**Sponsored by:**


A message from one of our partners:

Can Watson make us more creative? The promise of computational creativity is to help us think outside the box, explore new white spaces and transform experience.

Researchers are experimenting with the next step in cognitive computing – going from making inferences about the world to generating new things the world has never seen before. They set out to explore the impact of using computational creativity in the culinary arts. I wonder what Watson will cook up next, maybe cognitive marketing.

Reach us at to discuss the endless possibilities of how IBM can help you engage buyers in highly relevant, interactive dialogues across digital, social and traditional marketing channels with the latest technologies.

Razorfish’s 6th Tech Summit!

We are only a few weeks away from Razorfish’s 6th Tech Summit!

The Internet of Things and sensor driven experiences are drastically changing the way people consume, transact, and generally interact with brands.

In its 6th year, the Tech Summit brings together more than 200 attendees and speakers for two days of insightful presentations, hands-on workshops and networking, all in one exciting place - New York City - where we’ll discuss how these changes impact your customer experiences.

Featured Sessions Include:

  • **PSFK Founder and Editor-in-Chief Piers Fawkes** will discuss the Internet of Things, exploring how a combination of ubiquitous computing and embedded sensors will bring an array of connected interactions and automated experiences to the world around us

  • Roy Fielding, creator of the REST architectural style and Senior Principal Scientist at Adobe, will talk about the future and past of managing content and services in this new era of devices and experiences

  • Shane Dewing, Senior Director, Product Management, Qualcomm Connected Experiences, Inc., will share how in the Internet of Everything (IoE), devices, systems and services connect in simple, transparent ways and interact seamlessly among devices across brands and sectors

  • **Peter Semmelhack, Founder and CEO of Bug Labs,** will share the amazing work his team is doing with freeboard to help teams ideate and create with the Internet of Things in real time

Join us to boost your tech IQ, connect with old friends and meet your future partners.

Attendance at the Tech Summit is by invite only. Please contact your Razorfish rep or email us at for more information.

Building an IVR system in the cloud

Interactive Voice Response (IVR) systems offer a way for users to interact with existing software applications using voice and keypad input from their phones. Below are some of the key benefits that IVR systems offer.

  • Allow access to software systems through phones in addition to other interfaces like browsers & desktop clients

  • Self-service systems that reduce the need for support staff

  • Systems that run 24/7

  • Systems that perform routing based on customer profile, context, etc.

This article will focus on how to build a flexible and extensible IVR system painlessly using cloud-based services like Twilio.

Twilio is a cloud communications company offering Infrastructure as a Service (IaaS). Twilio provides telephone infrastructure in the cloud and exposes it through an Application Programming Interface (API) with which one can build applications to send and receive phone calls and text messages. Getting started with Twilio is easy:

  • Sign up on

  • Buy a number

  • Program the number by connecting it to an HTTP/HTTPS URL. This is the URL that will be invoked when the number is dialed. The URL needs to respond with an XML output, called TwiML, Twilio’s proprietary XML language. Using TwiML, developers can perform useful functions like playing a text message as speech, gathering data from callers using the keypad, recording conversations, sending SMS, connecting the current call to any other phone number, etc.


Since the phone numbers can be programmed and controlled using any HTTP/HTTPS URL, it’s easy to build interactive applications to handle incoming calls. The URLs can point to static XML/TwiML files or to dynamic web applications that interact with a database and other systems and perform custom business logic.
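As a concrete sketch of such an endpoint’s output, the following builds a TwiML response using only the Python standard library. `Say` and `Gather` (with `numDigits`, `action` and `method` attributes) are real TwiML verbs; the greeting text and the `/menu` action path are invented for illustration:

```python
import xml.etree.ElementTree as ET

def ivr_welcome_twiml():
    """Build a TwiML response that greets the caller and gathers
    a one-digit menu choice from the phone keypad."""
    response = ET.Element("Response")
    say = ET.SubElement(response, "Say")
    say.text = "Welcome to the support line."
    # <Gather> collects keypad input and POSTs it to the action URL
    gather = ET.SubElement(
        response, "Gather", numDigits="1", action="/menu", method="POST"
    )
    prompt = ET.SubElement(gather, "Say")
    prompt.text = "Press 1 for billing. Press 2 for technical support."
    return ET.tostring(response, encoding="unicode")
```

A web application would return this string with an XML content type; Twilio then reads the greeting aloud and posts the caller’s keypad digit to the action URL.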

In addition, Twilio also provides REST APIs to perform functions like making a call, modifying a live call, collecting call logs, creating queues, buying numbers, sending SMS, etc. Helper libraries are available in all the popular programming languages, providing a wrapper for working with the REST APIs.
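For the outbound direction, a sketch of how a call-creation request to the REST API might be assembled is shown below. The `2010-04-01` `Calls.json` endpoint form is Twilio’s documented REST shape, but the account SID and phone numbers are placeholders, and authentication headers are omitted; the request is built, not sent:

```python
import urllib.parse
import urllib.request

# Placeholder account SID -- a real one comes from the Twilio console.
ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

def build_call_request(from_number, to_number, twiml_url):
    """Assemble (but do not send) a POST request that would ask
    Twilio to place an outbound call."""
    url = ("https://api.twilio.com/2010-04-01/Accounts/%s/Calls.json"
           % ACCOUNT_SID)
    data = urllib.parse.urlencode({
        "From": from_number,   # a Twilio number you own
        "To": to_number,       # the callee
        "Url": twiml_url,      # Twilio fetches call instructions (TwiML) here
    }).encode()
    return urllib.request.Request(url, data=data, method="POST")
```

In practice one would use a Twilio helper library instead, which handles authentication and response parsing.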

From: Khurshidali Shaikh - Razorfish India Team

New book from the Chief Marketing Technologist Blog author, Scott Brinker

We have long been a fan of Scott Brinker’s writing on his blog. His thinking has helped build a bridge between marketing and technology, which aligns really well with the transformation we are seeing in the marketplace. His new (mini)book, A NEW BRAND OF MARKETING: The 7 Meta-Trends of Modern Marketing as a Technology-Powered Discipline (free download here), is a great read to help drive home why this bridge is required for modern marketing to consumers. Yes, as technologists here at Razorfish, you might expect us to say things like moving from rigid plans to agile iterations or from art and copy to code and data. It’s not just that those are exciting meta-trends for us; it’s also what we are seeing consumers demand. What’s exciting about the technology disruption happening in marketing today is that it’s a world that puts the customer in charge. No matter how we slice and dice the different ways that marketers need to connect with their customers, it’s all about a relevant, interesting, and powerful respect and understanding for customers. We had an event with the NYC Media Lab (more here), and one of my favorite answers on the panel that night was from Carl Schulenburg, founder at Oomolo. When asked what he thought the future of mobile marketing was, his response was: a one-to-one relationship with customers. We use the context of what their mobile or other device is telling us and provide relevant, contextual, useful messages and services on a one-to-one basis. That’s the future.

2014 Fluent Conference Recap

Razorfish presentation layer engineers from around the U.S. recently converged on San Francisco for the 2014 Fluent Conference (the third installment of O’Reilly’s annual web conference on HTML5, JavaScript, and other web technologies). Several themes were woven into this year’s working sessions, presentations, and general thought leadership. Two that stand out in my mind were:

Tooling, Automation, and Collaboration

Hidden within the many sessions devoted to web production automation and collaboration, I discovered a few “debugging” oriented tools/techniques that caught my attention:

    • LiveReload: a browser plugin that applies code changes without a page refresh (including support for mobile devices). The tool also compiles abstraction layers for you (Sass/LESS, CoffeeScript, etc.). Nice!
    • Ripple: Apache Ripple (recently resurrected from the dead) is impressive, although not 100% bulletproof yet as a workflow/debugging tool (it’s still an emulator, after all). This Chrome-based emulator is designed for Apache Cordova/PhoneGap debugging purposes, similar to the modern browser’s typical “web developer” tools. Check out the Accelerometer feature!
    • GitHub: everyone’s favorite open-source Git collaboration host, which provided some insight into their integration of Git “pull requests” into their main web UI.

Is there a Frameworks War?

As JavaScript continues its relentless march towards becoming the de facto language of the web, and applications continue to grow in complexity, libraries and (so-called) “frameworks” have proliferated to address those needs, particularly on the presentation layer. Although comparisons between Ember and Angular were prevalent and suggestive of an existing “frameworks war,” the overall tone at the conference was deliberate in accentuating that various approaches can be accommodated, based on business needs.

Here is the most interesting part. Angular and Ember were well-represented entities at Fluent 2014. Each is a “framework” in its own right, with something to offer (and something it doesn’t).

Ember is closest to the traditional definition of “framework,” in that it is a specific toolbox for “building large, maintainable applications” and is designed with standard best practices in mind (in regard to templates, routing, models, etc.). The expectation (for success) is that developers will accept and follow its structural conventions, especially for Ember’s sweet spot: larger multi-page/navigational applications.

Angular is more of a toolbox for building your own framework. Core features such as modularity and dependency injection usually make it easier to test. The tool’s flexibility in defining the appropriate architecture for the particular need has obvious benefits, but often leads to the infamous “you’re doing it wrong” argument by competing philosophies. Hilarious. Suffice it to say that Angular has the larger mind share at this point, and is suited for smaller applications that won’t grow beyond their original design.

Author: Fred Welterlin, Presentation Layer Technology Director

Razorfish Tech Summit 2014

As the Internet of Things becomes more ubiquitous, it drastically changes the way people consume and interact with brands. We are going well beyond the mechanics of who owns the glass to using data to drive predictive, delightful experiences. Long gone are the days of interrupt-based user interactions. The future will be relevant, next-generation experiences powered by amazing enterprise content and commerce platforms. Fortunately, we have amazing communities and growing technology enablement from the maker and hackathon movements to help us meet consumers on their own terms.

Join Razorfish and industry thought leaders for two exciting days of keynotes, panels, case studies and workshops where we’ll explore how these changes impact your customer experiences.

Stay tuned for more updated speaker and agenda info as it becomes available.

Attendance at the Tech Summit is by invite only. Please contact your Razorfish rep or email us for more information.


As Razorfish continues to scale globally, we have taken this opportunity to ensure we collectively share the Razorfish Technology Team’s learnings and points of view. As part of our broader technology network outreach and connection, we are rebooting our community blog. This blog will have many contributors across our global network, led by the CTO team, helping to surface the most important technology trends impacting the work we do for our clients. We believe it’s our responsibility to share our learnings and enable feedback from the broader global community. Please add us to your Twitter, RSS feeds, etc. We look forward to sharing and hearing from the community.

Our Top 5 Insights from CES. Chris, Jason, Jeremy, and Ray

From our perspective, iterative improvement and innovation was the theme of CES 2014. While many of the biggest platforms previously introduced at CES are now driving their own conferences (e.g., Apple or Mobile World), that doesn’t mean that this year’s CES was any smaller in terms of importance or scale – in fact, auto now fills much of the void left by these two major vacancies. Overall, it was still exciting to see the numerous incremental improvements and, to some degree, validation of exciting technologies that were already on our radar. Notable examples this year included Mercedes, which delivered on integration with the now-mature iteration of the Pebble watch, and CES winner Razer Nabu, a new, better type of wearable. CES is certainly an indicator of market opportunity, and with the number of wearables at CES 2014, we can likely expect broader wearable adoption by consumers.

Here are our top five examples of iterative improvement and innovation at CES 2014:

  1. Internet of everything. “Anything that can have a sensor, will have a sensor,” says Chris Bowler. And we saw a wide array of examples of this, from baby clothes providing important health data to the proximity sensor on your Nest turning down the heat in your house when you leave for work. Cisco at CES imagined a fully networked future. Qualcomm, meanwhile, was driving the theme of context-driven interactions, which is really the strength of sensors and timing. Near-term implications: Some rumors include live kiosks across NYC with a geofenced/proximity-based ad network. “This will move from reactive to predictive with passive experiences,” says Jeremy Lockhorn.

  2. Technology wearables and fitness. There were endless rows of wearable tech, some trying to get us more fit and better notified, with the best including the award-winning Razer Nabu wearable fitness band, Audi cars that track your health, and an Xbox One that can derive your pulse from video. Near-term implications: The number of “me toos” here shows that a lot of organizations think there is room to grow in the market.

  3. Smart TV and co-browsing: Operating systems are getting better and better in TVs, though it’s still anyone’s game as to what consumers will gravitate toward. LG’s webOS is clearly a big evolution, driving contextual sports content or extrapolating show metadata to display relevant ads in real time. With more and more time shifting and people skipping ads, this seems like a logical, but unfortunate, step for non-cord-cutters. Near-term implications: “Samsung’s Super Bowl ads will drive experiences synchronized with the content, or it will be synchronized second-screen experiences,” says Jason Goldberg.

  4. Device diversity: Tablets and laptops are increasingly looking the same – in some cases, we won’t be able to tell the difference between them. All the PCs are running Windows 8 with new browser paradigms, like snap mode, and smart TVs have different browsers and resolutions. Near-term implications: With so many devices with so many different interaction paradigms, we need a much cleverer system to publish across devices. Driving past responsive and adaptive design to more fluid publishing will be critical for the work we do for our clients.

  5. Auto tech: There was lots of news around the adoption of Android by Audi, GM, Honda, and Hyundai and the newly formed Google and NVIDIA Open Automotive Alliance. The vision is more than just Google Play apps in your infotainment system; it is the connected vehicle. Imagine the car ahead of you slips on ice and warns your car to slow down ahead of the slippage. Near-term implications: Take Mercedes’ announcement about integration with the new Pebble Steel, for example. It won’t start your car, but it will warn you of upcoming hazards with a vibration, help you find your car in a parking lot, report fuel status, etc.

Also of note:

  • This CES validated the power of Kickstarter and Indiegogo and their ability to drive real consumer success. There were 40 hardware devices funded by Indiegogo and 30 current or completed Kickstarter projects.

  • Of course, the next version of the amazing Oculus Rift blew people away; it now supports motion tracking with crisper imagery.

  • Things you might want to short from CES: the Bluetooth toothbrush and anything curved. But of course, only time will tell.


Razorfish Technology 2012 Summary

The 2012 Razorfish Technology Summit was a huge success. With over 180 attendees from Razorfish, clients, VivaKi brands, and industry thought leaders, it was our largest event to date.

Delta Air Lines executive Bob Kupbens kicked off the morning with an inspiring keynote on how big data enables better customer experiences. Then our Chairman, Clark Kokich, started the afternoon by sharing his thoughts on the view from the corner office. Throughout the day we enjoyed points of view ranging from the impact of responsive design to learnings from our work with Special K. Razorfish’s Rafi Jacoby closed the day with his insightful vision of future technologies.

And since it all really does boil down to code, we added workshops so participants could get their hands dirty with Adobe and Rackspace, Amazon Web Services, Google App Engine, and Razorfish Scrum for Teams. A very special thanks to Stuart Thorne for inspiring the team to do some amazing work for charity in just three hours!

The event also coincided with the launch of the Razorfish 5 report. Check out the report here

Thanks to our partners and the Razorfish team for pulling this great event together. See below for links to the presentations and shareable videos, as well as descriptions of the workshops; if you would like, we can run these at your organization as well. Just reach out for more information.

To help us improve future summits, take our survey here. Feel free to reach out to your Razorfish contact, or send us a note for the details!

Until next year! The Razorfish team

If you would like to watch or share any of the presentations, see below.

Thursday, June 14th

These workshops are designed for groups of 15-20; certain workshops require specific software and pre-reads. Please reach out if you would like us to run them at your organization.

  • Workshop A - Scrum for teams: A hands-on, cross-disciplinary deep dive into how to apply scrum on your projects. – John Ewen, VP of Delivery

  • Workshop B - Razorfish Open Digital Services and Google AppEngine for rapid app development – Stuart Thorne, Experience Director

  • Workshop C - Using Amazon Web Services for rich and automated cloud hosting – Steve Morad (Amazon), Krish Kurrupath, Group Technology Director, and Ke Xu, Senior Technical Architect

  • Workshop D - Working with Rackspace and Adobe CQ to enable and cloud host powerful CMS web experiences - John Cunningham, Razorfish Europe CTO

Razorfish Technology Summit 2012!

Our 2012 Technology Summit is just around the corner!

The event is by invite only, so please reach out to your Razorfish contact, or send us a note for the details!

Here’s the agenda so far:

Thursday, June 14th

  • 7:30-8:45am :: Breakfast

  • 9:00-9:30am :: Welcome/Introduction - Ray Velez, Global Chief Technology Officer

  • 9:30-10:15am :: Keynote – Bob Kupbens, VP of Marketing and Digital Commerce, Delta Air Lines

  • 10:15-10:30am :: Break

  • 10:30-11:00am :: OmniChannel Commerce – Paul do Forno, SVP of Multi Channel Commerce and Kristen Flanagan, Senior Product Manager, Oracle

  • 11:00-11:30am :: The Evolution of Platforms – Drew Kurth, CEO, Fluent and Matt Comstock, VP of CIG

  • 11:30-12:00pm :: Emerging Experiences – James Ashley, Presentation Layer Architect and Jarrett Webb, Principal Developer

  • 12:00-1:00pm :: Lunch

  • 1:00-1:30pm :: Do or Die – Clark Kokich, Chairman

  • 1:30-2:00pm :: Developing for Responsive Design – Frederic Welterlin, Senior Presentation Layer Architect

  • 2:00-2:45pm :: Afternoon Keynote – John Mellor, VP Strategy and Business Development, Adobe

  • 2:45-3:00pm :: Break

  • 3:00-3:45pm :: Big Data panel – Moderated by Pradeep Ananthapadmanabhan, CTO of VivaKi’s Nerve Center

» Michael Howard, VP, Marketing, Greenplum » Dwight Merriman, CEO, 10gen » John Coppins, SVP, Product, Kognitio » Charlie Robbins, CEO, Nodejitsu » Florent de Gantes, Product Manager, Google

  • 3:45-4:15pm :: Multichannel Architectures, a Practical Case Study - SpecialK Design Your Plan – Gustav Hoffman, Global Director, Application Solutions, Kellogg; and Martin Jacobs, VP of Technology

  • 4:15-4:45pm :: The Year Ahead in Social Technologies – Rafi Jacoby, Director, Social Technologies

  • 4:45-5:00pm :: Closing - Ray Velez

  • 6:00-8:00pm :: Cocktail Party

Friday, June 15th

(Optional workshops; please RSVP in advance.)

These workshops are designed for groups of 15-20 and will be working sessions; certain workshops require specific software and pre-reads. Please RSVP to receive more info.

  • Workshop A - Scrum for teams: A hands-on, cross-disciplinary deep dive into how to apply scrum on your projects. – John Ewen, VP of Delivery

  • Workshop B - Razorfish Open Digital Services and Google AppEngine for rapid app development – Stuart Thorne, Experience Director

  • Workshop C - Using Amazon Web Services for rich and automated cloud hosting – Steve Morad (Amazon), Krish Kurrupath, Group Technology Director, and Ke Xu, Senior Technical Architect

  • Workshop D - Working with Rackspace and Adobe CQ to enable and cloud host powerful CMS web experiences - Vasan Sundar, VP of Technology

Razorfish Named 2011 Adobe Global Partner of the Year

It’s really exciting to see Adobe bring together Omniture and CQ to help us build better experiences for our clients. We’ve been fortunate enough to work closely across our clients to help drive better experiences. A big goal for us this year is to help our clients get more value out of their implementations by driving experiences that listen to what customers are telling us. Every digital interaction with a customer is an opportunity for us to learn their interests and unique asks as they interact with mobile, desktop, or any other channel experience.

The Adobe product suite is growing very rapidly, so spending time at an event like this helps us to understand the complementary capabilities from Omniture through to CQ. Some of the highlights included:

  • Adobe’s new predictive marketing technologies enabling the future of data driving digital experiences

  • Rapid integration of Adobe’s new tools from Context Optional and PhoneGap

  • The big push around data enabling all touch points from mobile and social all the way to traditional desktop

We are joining Google’s Cloud Transformation Program

Razorfish is super excited to join Google’s Cloud Transformation Program.  Our clients will benefit from the computing power, scalability, and security of Google’s cloud services, as well as Razorfish’s ability to deliver quickly and iteratively. As part of the Cloud Transformation Program, Razorfish will focus on helping our large enterprise customers build custom web applications and analytics tools using Google App Engine and Google Prediction API.

At Razorfish we provide our clients with cloud-based tools to create experiences that build their business. We’re proud to be a part of Google’s Cloud Transformation Program, which highlights Google partners who have expertise and a proven track record of success helping businesses make the most of their IT investments. As a member of the Cloud Transformation Program, we will train our employees on Google’s cloud services, with support from Google.

“We’re very happy to have Razorfish as part of the Cloud Transformation Program,” said Rahul Sood, Google’s Global Head of Enterprise Partnerships. “We’re excited to work with Razorfish to help enterprise customers build customized web applications and predictive analytics solutions—all hosted on Google’s cloud infrastructure. Razorfish has a great record of helping businesses make smart IT investments.”

Highlights from Razorfish’s annual tech summit!

Back in April, Atlanta hosted the fifth annual Razorfish technology summit. We explored how gesture, mobile, and cloud technologies are enabling a new digital reality for consumers as well as the enterprise. The concept of a cloud-enabled app store is a revolutionary new way to deliver software to the masses across devices and platforms. Enabled by high-speed mobile connections, powerful devices and increasingly low barriers to use, these technologies will change how we interact with brands, each other and the world around us. Here are a few videos/presentations:

Introduction: I kicked off the day with an overview of the topics discussed and the pace of technological change, and how it’s influencing the way we do business.

Keynote: Building an Ecosystem for Web Apps: Rahul Roy-Chowdhury, Product Manager, Google (41 mins) Rahul spoke to us about an exciting innovation that Google has brought to market—the Chrome Web Store—and the evolution of web apps, and how connectivity, offline/online storage and semantics give meaning to them.

Apps Everywhere: Mike Scafidi, Technology Director & Paul Gelb, National Mobile Lead (32 mins) From waking up and heading to the office to catching a plane and a night out, Mike and Paul take us into a future where our refrigerators talk back and facial recognition helps recognize contacts at a conference. And yes, there is an explanation for the mid-morning mobile traffic usage bump.

Case study: The Unilever Greenhouse Platform and Amazon Web Services: Norm Driskell, Director of Service Operations (45 mins) Norm shares a case study on how Razorfish created a digital marketing platform that leveraged the cloud to support, monitor and host one of the world’s largest portfolios of brands.

Marketing in the Age of Big Data: Pradeep Ananthapadmanabhan, Chief Technology Officer, VivaKi Nerve Center (26 mins) Marketers now have to contend with huge amounts of data—from websites, campaigns, mobile activity, social media, location-based, etc. Pradeep shows us one way marketers can make sense of it all.

Open Digital Services: Salim Hemdani, Group VP, Technology & Basel Salloum, Group VP, Technology (41 mins) Salim and Basel introduce the concept of Open Digital ServicesSM, a way for businesses to open up their APIs and accelerate innovation.

Case study: Mercedes-Benz Tweetrace: Ray Velez, Chief Technology Officer (5 mins) In the interest of time, we didn’t get into the technical aspects of creating the world’s first Twitter-fueled race, but here’s a great overview of the case study.

The Interface Revolution: Luke Hamilton, Associate Director of Emerging Experiences & Steve Dawson, Technology Lead, Emerging Experiences (27 mins) Luke and Steve brought their toys along to show us how gestural interfaces are changing the way marketers can interact with their customers.

Concluding Remarks: Ray Velez (2 mins) Special shoutout to @totkat, the summit’s most prolific tweeter and winner of a Motorola Xoom!

Approved presentations and videos can be downloaded from Slideshare if you’re interested. Hope to see even more of you at next year’s summit!

The summit might be over, but let’s keep the conversation going! What are your thoughts on cloud-enabled technologies, views on mobile, or predictions on gestural interfaces? Feel free to leave any thoughts in the comments section.

Using Microsoft Kinect to create new natural experiences

Razorfish’s Emerging Experiences team is using Microsoft Kinect to create new natural experiences, bringing computing power to a larger and larger audience. Combined with the adoption of touch on smartphones and large-format displays, physical experiences continue to benefit from digital augmentation. We’ve been really impressed with the Kinect SDK (from PrimeSense); as EE Technology Director Steve Dawson mentioned, it’s been better at recognizing hands than other gesture technologies, which matters because the hand can take so many forms that it is hard to recognize. Once you recognize the hand, people can naturally take control, kind of like an orchestra conductor. Check out the video here from the Fast Company article.

From Threads to Coroutines

Twitter’s recent move to Netty brought with it significant performance improvements, highlighting the importance of network and concurrency architecture. Netty looks like a great framework, making the most of what Java provides today - but at heart, Netty must still be based on multithreading. Mysterious problems are therefore expected. What is really needed is for the world to move past multithreading (any day now, please) which is just a terrible concurrency paradigm.

Coroutines are a better idea. You can do this in Python now, as of 2.5, via enhanced generators. Prior to PEP 342, the idea was first popularized among Python fans via the Stackless fork. This approach has proven useful for high-throughput networking, but not many other languages support the paradigm.
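A minimal sketch of what PEP 342's enhanced generators enable: a running-average coroutine that suspends at each `yield` and is resumed with a value via `send()`, preserving its local state between resumptions.

```python
# A minimal PEP 342-style coroutine: the generator suspends at `yield`
# and is resumed with a value via send(), so its local state (the
# running total) survives across suspensions.

def averager():
    """Consume numbers one at a time, yielding the running average."""
    total = 0.0
    count = 0
    average = None
    while True:
        value = yield average   # suspend here; resume when send() is called
        total += value
        count += 1
        average = total / count

coro = averager()
next(coro)                  # prime the coroutine to the first yield
print(coro.send(10))        # 10.0
print(coro.send(30))        # 20.0
print(coro.send(20))        # 20.0
```

The same send/suspend mechanism is what network frameworks build on: the "value" sent in can be the result of an I/O operation that completed while the coroutine was suspended.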

ECMAScript almost added it in version 4 – and how marvelous that would have been! The need is especially acute in the thread-starved world of browser programming. The pushback came from concerns about interpreter complexity, but that feature isn’t as hard to add to interpreters and compilers as people think.

Imagine making an AJAX call, where the call stack of your calling function is set aside and the thread released for browser use. Once the server responds, your function resumes – and it is as though you never left! Isn’t this just the obvious way to do it? I remember one scenario recently where I needed to solicit user input in the middle of a recursive backtracking algorithm, except - oh yeah, ActionScript can’t. Of course I could and did restructure around this language deficiency (Turing completeness and all) - but shouldn’t we instead be structuring our code in whatever way maximizes algorithmic clarity?
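That resume-where-you-left-off flow can be sketched with a generator-based trampoline. Here `fake_fetch` is a hypothetical, synchronous stand-in for the AJAX call; in a browser, the driver would hand the request to the event loop and resume the generator from a callback.

```python
# A toy trampoline: a generator yields a "request"; the driver performs
# the work (synchronously here, for illustration) and resumes the
# generator with the result, exactly where it left off.

def fake_fetch(url):
    # Hypothetical stand-in for an asynchronous HTTP call.
    return f"response from {url}"

def handler():
    # Reads like straight-line code, but suspends at each yield.
    data = yield "https://api.example.com/user"
    more = yield "https://api.example.com/orders"
    return [data, more]

def run(gen):
    """Drive a generator to completion, fulfilling each yielded request."""
    try:
        request = next(gen)
        while True:
            request = gen.send(fake_fetch(request))
    except StopIteration as done:
        return done.value

print(run(handler()))
```

The point of the sketch: `handler` never blocks a thread while "waiting", yet keeps its call-stack state between requests, which is exactly what the paragraph above asks of the language.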

So why not fix it? As further evidence of feasibility, Mozilla JavaScript found a way - and as predicted, this lets us do I/O in the way we should.

Now all we need to do is get ECMAScript on board. And while we’re at it, why not Java, .Net, and other thread-based languages? Let’s stop threading, and start generating. All the event-based I/O frameworks we see popping up are fine workarounds in the interim, but let’s address the core language deficiency instead.

What infrastructure changes are required when working with Amazon Web Services?

Working with cloud services like Amazon Web Services requires significant changes to the way we look at core capabilities taken for granted in traditional infrastructures. Imagine a world where you only get 5 static IPs, or where load balancers are software based. Or what happens if I am using traditional software that requires technologies like Microsoft Active Directory? The following sections highlight the learnings we have had in those areas.

Infrastructure and AWS:

While the AWS infrastructure allows operations teams a large amount of flexibility in terms of provisioning and managing resources, there are a few limitations of the infrastructure that teams have to be aware of and design around.

Addressing EC2 instances consistently

Almost every addressable infrastructure element (e.g. EC2 instances, Elastic Load Balancers, RDS database endpoints, etc.) has a dynamic IP. An EC2 instance has an internal IP / DNS and an external IP / DNS: the internal name resolves to the internal IP and the external name resolves to the external IP. AWS recommends that the internal IP be used when addressing instances internally, as this ensures that traffic is routed to the instances internally rather than going out to the external network and coming back in. Both the internal and external IPs are dynamically allocated - this is done to facilitate failover and also because static IPs (especially external ones) are a very limited and scarce resource.

This also means that if the instance is terminated and re-instantiated, there is no guarantee that it will retain the same IP as before. This is especially important with EC2 instances. Stopping and then starting an instance (a simple reboot does not cause this), or having the instance brought up on another EC2 node after a failure of the original node, will cause the IP address of the instance to change. This can be a challenge if some other component has to address the instance or a component running on it, e.g. the instance could be hosting an internally visible search engine or database engine that other components within the infrastructure address by a search URL or JDBC URL containing the IP address / hostname of the host server.


One way to overcome this is to associate a static IP with the EC2 instance. Once a server is assigned a static IP, we have seen that its external DNS name, when resolved internally, resolves to the internal IP of the server as opposed to the external IP.

Assigning a static IP doesn’t mean that the EC2 instance will have a static address for the lifetime of the server, just that it will have one as long as the instance is up. If the instance is stopped and restarted (either by the user or automatically on a failure), the instance will come back up with a dynamically allocated internal and external name and IP. Once the instance is back up, the static IP can be associated with it again. This means that instance startups may have to be monitored and startup events scripted to achieve this automatically.
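Scripting that re-association can be quite small. Here is a sketch using the modern boto3 SDK; the function waits for the instance to come up and re-attaches the address, and the instance/allocation IDs in the usage comment are placeholders.

```python
# Sketch: re-associate a static (Elastic) IP after an instance restart.
# The ec2 argument is a boto3 EC2 client; the IDs in the usage comment
# below are placeholders, not real resources.

def reassociate_static_ip(ec2, instance_id, allocation_id):
    """Wait until the instance is running, then attach the static IP."""
    waiter = ec2.get_waiter("instance_running")
    waiter.wait(InstanceIds=[instance_id])
    ec2.associate_address(InstanceId=instance_id,
                          AllocationId=allocation_id)

# Usage (requires AWS credentials; IDs are placeholders):
#   import boto3
#   reassociate_static_ip(boto3.client("ec2"),
#                         "i-0123456789abcdef0", "eipalloc-0abc1234")
```

In practice this would be wired into an instance-startup hook or a monitoring script, so the association happens without operator intervention.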

Another thing to note is that while the instance is being associated with a static IP, it will be unavailable for a period of time while the association takes place. In our tests we have seen this take anywhere between 5 and 20 minutes.

An important point to note is that each AWS account is limited to 5 static IPs, because static IPs are a scarce commodity. If a user requires more than 5, they have to submit a case to AWS support, who will then review the case for approval.

Active Directory(AD) / Domain Name Servers (DNS)

As previously noted, addressing instances / services on an EC2 instance is a challenge given the dynamic addresses of the servers. This is especially a problem when setting up something like an Active Directory within EC2. One way to mitigate the addressing of servers within your environment is to use a DNS server with a static IP address. Once a DNS server is set up within your environment, it should be easy enough for the administrator to allocate DNS names to these servers. Each EC2 instance is then configured to use the internal DNS server as its primary DNS server. In cases where individual servers have their dynamic IP change on outages or restarts, the administrator can update the DNS records appropriately, allowing the servers / components to continue to access the services on these EC2 instances without having to be aware of the new dynamic IP.

Since the DNS server becomes a key central sub-system within the architecture, it is good practice to have a secondary DNS server (possibly set up within another availability zone), also with a static IP, as a backup in case the primary DNS server goes down.

Elastic Load Balancing

Elastic Load Balancers are a great resource when you want external traffic load-balanced across a group of servers. For maximum availability, performance and redundancy, these servers are distributed across more than one availability zone. This allows you to add or remove servers that are servicing requests during peak and non-peak traffic hours. ELBs can also be configured with auto-scaling triggers such that server instances are added to or removed from the available pool of servers servicing user requests when certain thresholds (e.g. CPU utilization, memory high-water marks, etc.) are reached. Beneath the hood, these ELBs are managed such that any outages in the underlying instances that service requests at the ELB layer are handled automatically with minimal downtime, ensuring that the ELB layer is available as much as possible.

An ELB is allocated an external IP and a dynamic DNS name. Unlike EC2 instances, it cannot be allocated a static IP. It should always be addressed by its dynamic DNS name as the IP can change on failover.

The one disadvantage of ELBs is that they are always externally facing, i.e. one cannot set up an ELB so that it is visible only internally. Thus they are not suitable for cases where a set of EC2 instances has to send requests to a pool of internal servers. In these cases, users are forced to set up software load balancers (like HAProxy) and manage them on their own. Users are also responsible for ensuring redundancy and avoiding single points of failure in such cases.

ELBs provide 2 layers of load handling - one at the inbound gateway layer and the other at the target server pool formed by the EC2 instances that the user adds to the ELB configuration. If there is a lot of inbound traffic at the ELB endpoint, AWS can handle this higher load by bringing up new ELB instances. AWS then updates/adds DNS records for these new instances and uses DNS round-robin to distribute load among them. As long as end users continue to address the ELB by its dynamic DNS name, the system will leverage the additional ELB capacity, resulting in better performance. AWS can also increase the capacity of the internal servers that form the ELB layer, e.g. use medium or large instances instead of small instances to serve as ELB servers.

Root DNS:

A typical web application setup on EC2 leverages ELBs that load-balance external traffic to a pool of internal EC2 instances. As we have seen above, users should always address the ELB via its dynamic DNS name rather than its external IP address, to be able to leverage the scaling that AWS may provide internally in times of load or outages. Not being able to use an IP address can become a problem, especially when you want to point the apex record of a DNS zone at an ELB.

e.g. Assume that you are using the “example.com” zone for your application (example.com stands in for the real domain here). In a typical scenario, you will CNAME the DNS record for www.example.com over to the DNS name of the ELB that you set up for your account.

e.g. zone example.com:

www 3600 IN CNAME <your-elb-dns-name>.elb.amazonaws.com.

But a typical use case is to have traffic to the bare domain, i.e. “example.com”, also serviced by the web application (i.e. by www.example.com). For that you need the apex record (@ record) to point to the ELB. But since you don’t want to use the dynamic IP of the ELB, and since you cannot use a CNAME for a DNS apex record, you may be forced to set up a dummy server (again with appropriate redundancy for failover) that handles requests for “example.com” and redirects them to “www.example.com”, thus allowing the requests to be served eventually by the ELB-fronted web application.
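The dummy redirect server can be very small. Here is a sketch using Python's standard http.server, with www.example.com standing in for the scrubbed zone name:

```python
# Sketch: a minimal "apex redirect" server. Requests to the bare domain
# get a 301 pointing at the www host (which CNAMEs to the ELB).
# "www.example.com" is a placeholder for the real zone.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

TARGET_HOST = "http://www.example.com"

class ApexRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Permanent redirect, preserving the requested path.
        self.send_response(301)
        self.send_header("Location", TARGET_HOST + self.path)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve(port=80):
    """Run the redirect server; deploy behind redundant instances."""
    ThreadingHTTPServer(("", port), ApexRedirectHandler).serve_forever()
```

As the text notes, this box still needs its own redundancy and a static IP, since the apex A record will point directly at it.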

The NoSQL Movement

The NoSQL movement has recently taken on a momentum of its own. NoSQL is not an outpouring of frustration with databases; the name is taken to stand for ‘Not Only SQL’, and the movement is seen as a response to technical architectures where traditional database/data-query technologies are no longer the de facto approach to data storage and management.

Personally, I believe that this movement began with Eric Brewer’s CAP theorem in 2000. Brewer was talking about trade-off decisions that need to be made in highly scalable systems. He postulated that there were three core requirements that any distributed system had to exhibit: Consistency, Availability, and Partition tolerance, hence the ‘CAP’ of CAP theorem. He went on to state that any scalable system could only be guaranteed to provide two of these three attributes.

The result of this was that Werner Vogels, CTO of Amazon, coined the term ‘eventual consistency’ in 2008: that Availability and Partition tolerance were the attributes that mattered most at run time, and that system inconsistencies could be resolved over time, i.e. that continuous consistency is not usually required. As continuous consistency is the raison d’être of the database, this has meant that the classic role of the RDBMS has changed, or at least undergone a rethink, in the context of extremely scalable distributed processing. The fact that many of the Internet-scale businesses (Google, Amazon, Facebook, etc.) have been instrumental in developing open source NoSQL products for their own line-of-business applications has added to the impetus behind this movement. The nature and price point of cloud computing and storage means that our enterprises are able to store, process, and analyze much more of our own business data, and more often than not we are able to make use of the NoSQL tools that have been developed by these leading Internet businesses.
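Eventual consistency can be illustrated with a toy replica pair, assuming a primary/secondary setup where replication is deferred: a write lands on the primary immediately, and reads from the secondary are stale until replication catches up.

```python
# Toy model of eventual consistency: writes go to a primary replica and
# are propagated to a secondary asynchronously (here, only when
# replicate() is called). Reads from the secondary may be stale until
# replication catches up.
class ReplicaPair:
    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self.pending = []     # writes queued for replication

    def write(self, key, value):
        self.primary[key] = value
        self.pending.append((key, value))

    def read_secondary(self, key):
        return self.secondary.get(key)   # may be stale

    def replicate(self):
        for key, value in self.pending:
            self.secondary[key] = value
        self.pending = []

store = ReplicaPair()
store.write("cart", "3 items")
print(store.read_secondary("cart"))  # None: not yet consistent
store.replicate()
print(store.read_secondary("cart"))  # 3 items: eventually consistent
```

The trade-off Vogels describes is visible in the gap between the two reads: the system stays available for writes and reads throughout, at the cost of a window of inconsistency.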

Products like Hadoop and Hive, built around the Map/Reduce pattern that Google popularized for massive data mining, have evolved into cloud-based services like Elastic MapReduce from Amazon. Hadoop has been taken on as a top-level project by the Apache foundation, and is thus available at the public or private cloud level to any company with a requirement to process petabytes of data.

The NoSQL movement has also had increasing adoption and impetus from those application domains where the problems to be solved do not necessarily fit snugly into the classic ‘Relational’ model. A wide range of NoSQL products has sprung up which use different logical models to represent ‘semi-structured’ data. These include map/key-value oriented (Voldemort, Dynomite), column-family oriented (BigTable, HBase, Hypertable), document oriented (CouchDB, MongoDB), and graph oriented (Neo4j, InfoGrid) stores.

At the same time, traditional schema-based database structures are more often considered too inflexible to accommodate changing business models. Our own experience of working with O2 on a telecoms product catalogue has led us to consider a NoSQL solution as the natural path in the ecommerce platform evolution. The O2 business model and the types of products that it sells now include not only handsets, tariffs, and accessories but broadband, games, ringtones, consoles, financial services, and content, and could progress onto many other as yet unenvisioned physical and logical product offerings. We have been looking at NoSQL ‘graph’ databases as a way of modeling products, their attributes, relationships, and eligibilities to customer segments, in an attempt to develop a product catalogue that can continue to offer flexibility as the business evolves.
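The flexibility described here, products with arbitrary attributes and typed relationships, can be sketched as a simple property graph even without a graph database. The product data below is invented for illustration:

```python
# Sketch of a property-graph product catalogue: nodes carry arbitrary
# attributes, edges carry a relationship type, and new product kinds
# need no schema change. Product data is invented for illustration.
class Graph:
    def __init__(self):
        self.nodes = {}      # node id -> attribute dict
        self.edges = []      # (source, relation, target) triples

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, source, relation, target):
        self.edges.append((source, relation, target))

    def related(self, node_id, relation):
        """All targets reachable from node_id via the given relation."""
        return [t for s, r, t in self.edges
                if s == node_id and r == relation]

catalogue = Graph()
catalogue.add_node("handset-x", kind="handset", screen="4in")
catalogue.add_node("tariff-30", kind="tariff", monthly=30)
catalogue.add_node("broadband-basic", kind="broadband", speed="20Mb")
catalogue.add_edge("handset-x", "ELIGIBLE_WITH", "tariff-30")
catalogue.add_edge("tariff-30", "BUNDLES", "broadband-basic")

print(catalogue.related("handset-x", "ELIGIBLE_WITH"))  # ['tariff-30']
```

Adding a brand-new product type (say, a financial service) is just another node with its own attribute set, which is exactly the schema flexibility a graph store offers over a fixed relational schema.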

The case outlined is not uncommon: the data, and the approach to dealing with it, necessitate a cloud technology landscape, a departure from the intrinsic constraints of fixed datacenters and the RDBMS. Another good example is Razorfish’s own EDGE platform. Historically this application and its data would have made exclusive use of enterprise-level database infrastructure, hosted in a proprietary datacenter in a DR configuration. The volume of data, however, has been growing exponentially, as has the compute power required, necessitating an entirely new approach: using Amazon EMR with the Apache Hadoop/Hive/HBase product set to both host and process the increasing data volumes. This makes the most of the utility of Amazon’s cloud as well as the frameworks that others have positioned on that cloud, i.e. NoSQL databases.

It can be seen that, for managing data in modern web-centric enterprises, the new application class of NoSQL on the cloud is a better choice for anticipating change and trends than RDBMS solutions, given its support for unstructured data, horizontal scalability through partitioning, and high availability.

Near Field Communications - Primer

This article from Ars Technica is a great high-level primer on Near Field Communication (NFC). NFC is a very exciting technology that has actually been around in one form or another for years. Many folks consider it an evolution of the contactless payment systems already out there from MasterCard and Visa, and in Asia it has been around for a while in the form of FeliCa. There are well-known standards, like ISO/IEC 14443, supporting the wireless communication, especially around payments. The standard defines two types of data communication: Type A, which uses Miller encoding, and Type B, which uses Manchester encoding.

NFC with poster and phone

Keep in mind, likening it to contactless payment can suggest that’s the only usage; however, there are lots more potential applications. Think of it this way: hold your phone around 4 cm (the spec allows up to 20 cm, but most devices will work at about 4 cm) from whatever you are interested in and get more information. Looking at a car? Hold the phone near the side-view mirror and get cool videos. Looking at a tent (yeah, I like camping)? Hold it near a tent pole and get stats on the tent, etc. I think you get the point. However, these are only one-way communication examples.

While NFC stands on the shoulders of Radio Frequency Identification (RFID), it is different in a couple of ways. One difference is two-way, peer-to-peer communication: the NFC device (i.e. phone, camera, laptop, etc.) can communicate back and forth with the tag, whereas traditional RFID is one-way. Lots of new applications are enabled through two-way communication. Envision an NFC-equipped digital camera that transfers an image to an NFC-equipped TV set for viewing, or an NFC-equipped computer that transfers mobile apps to an NFC-equipped handset. Or shopping at a pharmacy: hold up the phone to a tag and get a coupon, and on and on. It’s always nice to see technologies like RFID start to catch up with the long-term vision.

Given that the communication requires close proximity, that inherently helps security. In addition to the proximity requirement, encryption is available as well. It isn’t built into any of the standards, but it is feasible and likely important in personal and financial applications.
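For a feel of what actually sits on a tag, here is a simplified parser for an NDEF well-known text record, the payload format NFC phones typically read from tags (a sketch that assumes a single short record, no ID field, and UTF-8 text):

```python
def parse_ndef_text_record(data: bytes) -> str:
    """Parse a single short NDEF 'T' (well-known text) record.
    Simplified: assumes the short-record flag is set, no ID field, UTF-8."""
    flags, type_len, payload_len = data[0], data[1], data[2]
    assert flags & 0x10, "only short records handled in this sketch"
    assert data[3:3 + type_len] == b"T", "not a text record"
    payload = data[3 + type_len:3 + type_len + payload_len]
    lang_len = payload[0] & 0x3F  # low 6 bits = language-code length
    return payload[1 + lang_len:].decode("utf-8")

# 0xD1 = message-begin | message-end | short-record + TNF=well-known;
# type 'T'; payload: language-code length 2, "en", then the text.
record = b"\xd1\x01\x08T\x02enHello"
assert parse_ndef_text_record(record) == "Hello"
```

A real reader stack also handles long records, record chaining, URI records and the encryption layers mentioned above; this just shows how little framing sits between the radio and the data.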

Razorfish Agile Offering

The offering is targeted at helping our clients adopt more Agile best practices to achieve better results faster and more efficiently. Our Agile team brings tremendous passion to this offering and is looking forward to taking it to the next level. The offering itself consists of a training program and methodology that we can deploy internally and with clients. In addition to core training on cross-discipline agile, which goes beyond just software development, we also offer organizational support on how to align with a lean organization that can support Agile product development. Here’s a link to the press release for more on our offering.

Here is a recent blog post from Forrester which was an early affirmation of our thinking.

Fully automating releases to production and The Toss Test

I just read this article on Web Ops 2.0, basically the movement to automate pushing builds to production. There’s been a lot on the web recently on the topic; this SlideShare presentation from Flickr is great, a proof point that we can release to production a lot more often, especially with agile/iterative teams that include QA practices as part of development rather than as a separate effort. The analogy of Spock as development and Scotty as operations is pretty spot on. It highlights another important point: while the technology is there, we also need to bridge the different personalities.


The ‘toss’ test in the article was funny. To determine the success of your environment, grab any machine, rip it out of the rack and throw it out of a window, preferably a high window. Can you automatically re-provision your systems and return to the previous state? Or question two: your senior engineer runs away to Alaska (I didn’t want to toss anyone out a window :)), can your operations proceed as normal?

So, not only does fully automated provisioning enable faster releases to production, but it also hardens your infrastructure and protects against failure.

Leveraging cloud APIs and technologies like Puppet, Chef, Kickstart and RightScale is what makes this all possible, with inexpensive, on-demand compute capabilities and automation coming compliments of cloud computing.
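The underlying idea of those tools can be shown with a toy ‘desired state’ converger in Python (illustrative only; real Puppet/Chef manifests are declarative DSLs, and the resource names here are invented): the spec, not a human, is the source of truth, so a tossed machine is rebuilt by simply re-running it.

```python
def converge(desired: dict, actual: dict) -> dict:
    """Bring 'actual' server state in line with a declarative 'desired' spec.
    A toy model of what tools like Puppet/Chef do: compare desired vs actual,
    apply only the differences, and report the actions taken."""
    actions = []
    for resource, spec in desired.items():
        if actual.get(resource) != spec:
            actions.append(f"configure {resource} -> {spec}")
            actual[resource] = spec
    return {"state": actual, "actions": actions}

desired = {"nginx": "1.18", "app": "build-42"}
fresh_machine = {}                      # the box we just "tossed" and replaced
result = converge(desired, fresh_machine)
assert result["state"] == desired       # same state, fully reproduced

# Re-running against an already-correct machine is a no-op (idempotent),
# which is what makes "just run it again" a safe recovery strategy.
assert converge(desired, dict(desired))["actions"] == []
```

That idempotency is exactly what lets you pass the toss test: the rebuild procedure and the routine configuration procedure are the same procedure.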

Chrome web store

Last week, Google released the Chrome Web Store. Not all reactions were overwhelmingly positive, but having played with it for the last week, I like it. Even though in some cases the ‘apps’ are not much more than links to various web applications, in others the applications are quite unique and different from the standard web sites. Having one single place to browse for new apps is fun, and the fact that the applications show up in my browser when I launch a new instance is very convenient.

It is going to be interesting to see how this further evolves, but as this Engadget article describes, it is sure to result in further UI innovations.

Java is still doing fine, it's the framework not the language

It’s really amazing to see all the innovation that keeps coming out of languages and frameworks. Obviously, languages like Ruby, and even more so frameworks like Ruby on Rails, are needed to challenge the status quo. Ruby and Rails helped pull Java and .NET into the stateless realm of the web and out of heavy session memory and cumbersome, non-service-oriented architectures. We were having a conversation with the Day CTO, David Nuescheler, who pointed out that the stateful, session-based architectures of old, like Struts, don’t really have the flexibility needed for the web of today. The lesson is that frameworks and languages need to become more RESTful and based on service-oriented approaches. Frameworks like Spring and Grails help provide those best practices for Java, for example.

So, it’s great having all these new languages out there, and given that we are doing more web development outside of the browser, there’s room for more diversity, which is great; but Java is still doing fine according to this ReadWrite Enterprise article. The interesting thing for Razorfish is that our clients are Fortune 500-1000 enterprises, and we don’t see a lot of the newer languages there, but we do see a lot of Java and .NET. Until there are drastic and readily apparent increases in language productivity, it seems it’ll continue to be more about the framework than the language. Frameworks like Java EE are extremely broad, whereas lightweight frameworks like Rails and Grails are very web-centric but not as complete.

That being said, I would keep my eye on node.js. Since node.js can run on just about any device, it may turn out to be the next ‘hot’ platform. I thought it was neat to see the webOS platform choose node.js as its application language. The folks at Joyent have some very interesting thoughts on node.js.

Social networks overtake Google in the UK

This post makes it clear that we need to think beyond just SEO when building web properties. If you think about it, SEO, and now also integration with social networks, is probably the most important thing you can do for your digital property; if you are building a site that no one can find, what’s the point? I do think social will help grow your link popularity, at least tangentially, and at the end of the day link popularity is a factor in organic SEO anyway.


Announcing our Amazon Web Services Partnership

Today we announced our Amazon Web Services Partnership. Building on our Razorfish 5 report and the recent Razorfish Technology Summit, this is the latest affirmation of our commitment to supporting the marketing and business needs of our clients with the cloud. We are really excited about the opportunities to work together, especially with the rapidly evolving cloud infrastructure technologies in the marketplace. We’ve been growing our cloud computing practice quite aggressively over the last couple of years and see huge potential for our clients. Some of the immediate benefits we’ve seen for clients are the following:

  • Speed to market: getting up and running with servers and infrastructure happens at a pace like never before, minutes as compared to weeks.

  • Elasticity: the ability to scale up and down easily, saving money and keeping up with unexpected demand. We rarely see good traffic forecasts, so this makes us more nimble and ready for unexpected traffic spikes with campaigns and product launches.

  • Business solutions we never dreamed of before: using technologies like Amazon’s Elastic MapReduce allows us to work with trillions of rows of data at very low cost. With traditional RDBMSes this would have been both cost-prohibitive and practically impossible. Imagine using EMR to build a mini-Google.

I am looking forward to the new and exciting things we can do with cloud computing.

Social Brands and Cloud Services Technology

Here’s a presentation I gave at a conference recently sharing our point of view on the linkage between social brands and cloud services technology. At the end of the day the power of cloud services helps to drive traffic in several powerful ways.

  • Social cloud services like Facebook Connect have already obtained permission to share data consistent with each user’s privacy settings. This will help you deal with the upcoming privacy rules in the European Union: the EU is requiring sites to ask users for permission before placing anonymous cookies on their computers, so ad servers, analytics, etc. will likely be impacted.

  • Social cloud services help increase your ranking with Google, Bing, and Yahoo. For example, Twitter is indexed by all those services. Facebook has started to let some content get indexed, but has to respect privacy guidelines in many cases.

  • Open APIs are empowering brands in new and innovative ways, with APIs both in and out. For example, Tasti D-Lite uses Foursquare and Twitter to power its loyalty program, and the Guardian Open Content API is being used to power new and innovative experiences like the Guardian Content Roulette application.

OPA Social Unified through the Cloud on Prezi

Rob Scoble's Keynote from the Razorfish Technology Summit

Rob Scoble was the keynote speaker at the fourth annual Razorfish Technology Summit. He gave us an outstanding overview of what’s going on with cloud computing and how it’s impacting the digital work we do with our clients. He also challenged us to think of new ways we can leverage cloud services and cloud infrastructure in our daily work with clients. Thanks for a great keynote, Rob. Here’s his Prezi presentation.

Copy of Scoble’s Razorfish Technology Summit Keynote on Prezi

Launch of the Razorfish 5: Five Technologies That Will Change Your Business

We launched the Razorfish 5 report today. We put a lot into the report and are excited to share it more broadly. In this report, we discuss the five technologies that are transforming businesses, including multi-touch and cloud computing. The findings are based on Razorfish’s experience designing and integrating complex technologies for clients around the world. The report explores the recent advances and upcoming developments of five significant technologies. Key findings include:

  • Cloud services and open APIs will become essential for social brands, making it easier for businesses to tap into the consumer’s social graph.

  • Reliance on the cloud’s infrastructure will continue to grow as the need for real-time scalability becomes increasingly critical for survival.

  • Multi-touch technology, which has already become mainstream in consumer devices, will infiltrate retail and business environments so extensively that it will become expected.

  • Improved hardware and connectivity will help mobile make the final transition into cloud-based data that allows the user to learn the world around her in real time.

  • Agile and iterative Web development will open new doors for innovation by allowing developers to innovate and adjust products based on immediate customer feedback.

Thanks to everyone who contributed to the report:

Writers/Contributors Shiv Singh Tobias Klauder John Cunningham Steve Dawson Luke Hamilton Paul Gelb Mike Scafidi John Ewen

Marketing & PR David Deal Lauren Nguyen Katie Lamkin Crystal Higgins-Peterson Heather Gately Jennifer Li

2010 is definitely the year of the app

I’ve always been a fan of applications. I remember using my early Palm apps with pure joy, even a simple app to calculate gas mileage or track rides, so happy that innovators like Palm paved the way. Handango was the app store before iTunes came along. With real money on the table we see folks flocking to Android; some are even fleeing iTunes for the more open pastures of Android, the biggest name to flee being the developer of the iPhone Facebook application. But back to my point: it’s more exciting to think of applications everywhere.

For example, the Microsoft Sync platform in Ford cars will enable application development. Imagine an application that counts how many times I hit the brakes on my commute home, or my favorite: an application that tracks my various routes home and tells me which is fastest on average.

Now HP has gotten into the mix with applications for your printer, one of the first being a Fandango application that lets you print your movie tickets right from your printer. If you need to print it anyway, skip the computer or portable device.

CNN had a great writeup on why it’s the year of the app. I forgot to mention what’s happening in TVs: with Yahoo TV, a bunch of manufacturers are making it easy for what are basically Konfabulator desktop gadgets to end up on your TV, and Samsung is encouraging folks to build apps into their TVs and Blu-ray players.

Technology Predictions for 2010

Razorfish’s Matt Johnson outlined his predictions for content management over at our CMS blog. Many of his predictions will hold true for web technology at large as well. I see traction and opportunities for:

  • Cloud options: We will see further movement towards cloud solutions, and more vendors providing SaaS alternatives to their existing technologies. It ties into the need for flexibility and agility, and the cost savings are important in the current economic climate.

  • APIs and SOA: Functionality will be shared across many web properties, and the proliferation of mini-apps and widgets will continue. APIs are becoming a critical element of any successful solution. This is also driven by the increased complexity of the technology platform: solutions we now develop frequently incorporate many different technologies and vendors, ranging from targeting and personalization to social capabilities.

  • Open source: Not only in content management but in many other areas, open source will start to play an important role. Examples are around search, like Solr, or targeting, with OpenX. Cloud computing also further drives the expansion of open source: as companies look to leverage cloud solutions for agility, the licensing complications of commercial solutions will drive further open-source usage.

What do you see as additional trends?

Another new chapter for Razorfish

As we have moved under the Publicis umbrella there have been a lot of things to be excited about. Being part of Microsoft was amazing; we grew our skills and made lasting relationships. Our strong technology-agnostic skills in Java and LAMP have grown even stronger as we added depth in .NET as well. As always, we search for the right technology to solve business challenges.

In my new role as CTO of Razorfish, one of our first big events is the technology summit, currently planned for the first week of February 2010. Publicis is clearly excited, as are we, to bring our deep technology skills to the large client base of a company the size of Publicis. Change is always exciting. Below are some thoughts on the technology aspects of the acquisition coming out in the press and from Publicis’s executives.

Here’s what Harley Manning from Forrester has had to say:

The firm has much stronger design capabilities, both for user experience and what we call “brand image”. Plus – and this is just my opinion because we did not evaluate them on this – it [Razorfish] has stronger technology capabilities as well. The latter is important because there are some agencies out there with very strong tech chops, including IBM Interactive and Sapient. And it will become even more important as interactive moves to high-function multi-touch mobile devices, or even stationary multi-touch devices like the way-cool Coke vending machine Sapient displayed at Forrester’s recent Customer Experience Forum. Because ultimately, a great design has to actually work in order to deliver a great customer experience.

Brandweek captured some quotes from David Kenny:

“We got it at good terms,” said David Kenny, managing director of VivaKi, Publicis’ digital unit. “They bring much more technology. I think that’s important. Their clients think very highly of them in terms of being able to technologically deliver, and that’s stronger than anything we have, including Digitas.”

In an interview with paidContent, Kenny pointed to Razorfish’s technology as well as its extensive global reach as key reasons behind the merge.

SharePoint Conference 2009 - Day 3

Day 3!  The whole reason I am at the SharePoint Conference this year is that I am helping our client present their SharePoint case study in one of today’s sessions.

I scheduled some lightweight sessions in the morning, starting with the fun Building SharePoint Mashups With SharePoint Designer, Bing Maps and REST Services.  This session was pretty straightforward: using a Data View web part to retrieve data from MSN and Twitter RSS and/or REST feeds, and then using XSLT to display map mashup data (Google or Bing).

Before lunch, I went to Best Practices for Implementing Multi-Lingual Solutions on SharePoint 2010 to see what new things 2010 has in store for Variations.  While there are big changes in store for multi-lingual solutions, they are more on the admin/UI side.  The biggest improvement is the performance gain from running Hierarchy Creation as timer jobs.  From a UI perspective, the chrome is now also localized based on the user’s preferred language, selectable from all the language packs installed.  And as much as I shake my head when I hear this from people, SharePoint 2010 DOES NOT TRANSLATE YOUR SITE CONTENT AUTOMAGICALLY!

I met my client for lunch and we proceeded as a group to Breaker E - our session room.  We presented “Kraft: Migration of Consumer Facing Websites to SharePoint” to a roomful of people, and a few came up with questions, comments and leads after the session.  We consider it a success!  That was of course the highlight of my day, and everything else was just blah after that point ;p  If you missed it, or are interested in watching the video of the presentation, a copy of the deck and a video are up and available on the SharePoint Conference site.  You will need to log in with your Windows Live ID.

I spent the afternoon going to Developing Social Applications with SharePoint 2010 and Customizing the Visual Studio 2010 SharePoint Deployment Process. In 2010, comments, ratings, my network, and RSS feeds all come out of the box.  The social features available in SharePoint 2010 are OK but not good enough yet, IMO.  This is one area where I think the focus is still more on ECM implementations than the Internet.  The manager/employee metaphor just will not work in the real world.  And though I was told by the product team that it could be implemented in an Internet scenario, as shown in their AdventureWorks demo, I will have to form that opinion once I’ve seen their AdventureWorks demo site.  Deployment has indeed been made simpler in VS2010 by being able to compile and deploy from VS2010 to a local SharePoint instance.  But for deploying between environments and between farms, WSPs are still the best way to go.

This evening’s event is Ask The Expert and SharePoint Idol, a Rock Band competition.  I thought for a sec about joining a team but changed my mind.  I had fun watching them though.

SharePoint Conference 2009 - Day 2

The challenge I always have with these conferences is the plethora of choices available to attendees.  I already know what topics I want to focus on:  WCM; Architecting, Developing and Building public facing internet sites, and Social features in 2010.  But even so, there are still time slots where I have narrowed down the choice to 3, and then I have to make the tough decision and hope that I made the right choice.  For the most part, I decided to always go to a 300 or 400 level session, and then just watch the video and the deck online for the 200 sessions I missed.

For the 9am slot, I had to choose between Advanced Web Part Development in VS 2010 and Introduction to Service Applications and Topology.  The architect won over the developer, so I went to the Service Applications session. Essentially, in 2010 the SSP (Shared Services Provider) is replaced by the new Service Applications architecture. You build service applications that can live on a separate application server, and you call them from clients, in this case a SharePoint web front end, via proxies.  I’m not sure if this is a correct simile, but I kind of liken it to the old DCOM architecture. This makes it easier for organizations (and frankly, ISVs) to build service applications that can be deployed once and then used in multiple SharePoint web apps, and even multiple SharePoint farms.

There’s a follow-up session to this, Scaling SharePoint 2010 Topologies for Your Organization, but I skipped that in favor of Overview of SharePoint 2010 Online. SharePoint Online is another product in Microsoft’s “Software as a Service” offerings: essentially a service where Microsoft hosts and manages SharePoint for your organization.  It is part of Microsoft’s Business Productivity Online Suite (BPOS), which also includes Exchange Online, Office Live Meeting, Office Communications Online and Dynamics CRM Online. It is good for small or medium-sized businesses but can also be considered for the enterprise in some special cases.  The important thing to note is that this does not have to be an all-or-nothing decision: SharePoint Online is supposed to complement/extend your on-premises infrastructure, not necessarily replace it.

In the afternoon, I agonized over Developing SharePoint 2010 Applications with the Client Object Model and Microsoft Virtualization Best Practices for SharePoint, but ended up going to Claims-Based Identity in SharePoint 2010.  The client object model was getting a lot of good tweets during and after the session, and I see a lot of opportunities there for us to pull SharePoint information via client calls, i.e., JavaScript or Silverlight.  The virtualization session focused on Hyper-V, so I didn’t feel too bad about missing it. In the Claims-Based Identity session, Microsoft introduced their new Identity Framework and explained how it works.  It essentially works like Kerberos, with SAML tokens being created.  The good news is that it supports AD, LDAP, and SAML.  The bad news is that it doesn’t support OpenID and other standard internet auth schemes… yet.
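Conceptually, claims-based identity means a trusted issuer packages user attributes (“claims”) into a signed token, and the application validates the signature instead of authenticating against AD/LDAP itself. Here is a deliberately simplified Python sketch, with an HMAC-signed blob standing in for the real SAML machinery (the shared secret and claim names are invented for illustration):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-secret"  # assumption: issuer and consumer share this key

def issue_token(claims: dict) -> str:
    """The 'identity provider': serialize claims and sign them."""
    body = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def validate(token: str) -> dict:
    """The 'relying party': verify the signature, then trust the claims."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_token({"name": "alice", "role": "editor"})
assert validate(token)["role"] == "editor"
```

The real stack replaces the HMAC with X.509-signed SAML tokens and adds issuance protocols, but the division of labor is the same: the app never sees a password, only verified claims.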

I wanted to know more about composites and the new Business Connectivity Services (BCS), so I went to Integrating Customer Data with SharePoint Composites, Business Connectivity Services (BCS) and Silverlight.  BCS is another new thing in 2010 that is interesting.  Allowing SharePoint to create External Content Types that can pull data from external LOB systems opens up a lot of possibilities, but most of the demos I’ve seen so far only connect to one table.  In the real world we would be connecting to more complex schemas, in a lot of cases pulling hierarchical data, and I wanted to see how this works and, more importantly, whether it will support CRUDQ features.  This session finally demoed how to connect using a LINQ data source.  Didn’t see the CRUDQ part, though, because the demo was read-only data.

For the last session of the day, I chose between Securing SharePoint 2010 for Internet Deployments (400) and SharePoint 2010 Development Best Practices (300).  Of course, I chose the geekier session, since security is a hot topic on public-facing sites.  However, this was probably one of the more disappointing sessions for me, as it was really targeted more towards SharePoint IT pros than developers: it was more about hardening your servers and protecting your network, and these considerations already come by default in Windows 2008.  I probably would have enjoyed the best-practices session more, even though I was afraid it would be filled with “duh” moments.  I have to check that deck out though; it produced some funny tweets.

Day 2 is also the night of the Conference Party.  This year, the theme is 80’s night at The Beach (Mandalay Bay) with Huey Lewis and the News providing music and entertainment.  Too bad I missed it.

SharePoint Conference 2009 - Day 1

I’m at the SharePoint Conference in Vegas this week. Registration and Exhibit Hall started Sunday night, but sessions officially started Monday. I am tweeting all day during the conference, follow me (@mmdeluna) if you are interested. You can track tweets using #spc09. I will be posting daily summaries. Stay Tuned!

Registration and Exhibit Hall

This year’s conference is SOLD OUT. Compared to last year’s 3,800 attendees, this year’s 7,400 attendance is a testament to how widely SharePoint has been adopted in the enterprise. Registration was pretty well organized, and the badges are smart cards that are scanned (optionally) by vendors for mailing-list subscriptions and contests, and also scanned by event managers for session attendance. Most of the vendors I saw in the Exhibit Hall are from document management services: scanning, annotating, encrypting, converting, etc. And then there are the normal partner vendors: ISVs, SIs, training, data recovery, content migration and professional services. Having said that, the giveaways were a bit lame :)


There were two keynotes scheduled on day one, which lasted the whole morning. You would think it wasn’t smart to ask 7,400 attendees to sit still for almost three hours, but kudos to the presentation team, they pulled it off. Steve Ballmer did his FIRST SharePoint Conference keynote, one of the last few things Bill used to do that Steve hadn’t done yet. Tony Rizzo and the others did a great job on the demos, doing enough to whet the appetite of all the geeks (like me) in the room. Here are the items that “struck” me during the keynotes. I am hoping to attend some of the sessions that show these in action.

  • There’s a HUGE emphasis on SharePoint and Internet-facing sites, so much so that MS has renamed its products and services to emphasize this. Expect licensing prices to reflect this change.

    • Intranet products: MS SharePoint Foundation 2010 (formerly known as WSS), MS SharePoint Server 2010, MS FAST Search Server 2010 for SharePoint

    • Internet products: MS SharePoint Server 2010 for Internet Sites (STD and ENT editions) and MS FAST Search Server 2010 for Internet Business

  • Oh yeah - Steve Ballmer featured Kraft Foods in his keynote - Nice! I wonder if this will drive attendance at our session (Wednesday, 1021 @ 1:15 pm)

  • SharePoint 2010 goes on public beta in November - don’t forget to download

  • SharePoint Online (SharePoint in the Cloud)

  • SharePoint Workspaces (Groove Makeover)

  • SharePoint Composites - I need to know more about this.  Interesting.

  • Developer tool integration in VS 2010. One-Click build, deploy and debug >> AWESOME!

  • PowerShell scripting - say goodbye to STSADM

  • New External Content Type / BCS (formerly BDC) - opens up possibilities with integration to backend systems. I’m very excited about this

  • SharePoint Service Applications - say goodbye to SSP

  • Improved List Performance and Caching - taxonomy navigation (tags and labels)

  • New and Improved Central and Site Admin UI - it’s AJAX yo!

  • Built in Spell Checker - it’s the little things…

  • Our PLDs and PLAs will like the improved support for standards, especially WCAG

  • Some Social Computing features out of the box - ratings, notes/comments, blogs, wall (My Network)

  • VS 2010 and SharePoint 2010 running on Windows 7 - a 64-bit mobile development machine. Yay!

Steve made a point of saying he didn’t think there’s any software out there that competes directly with SharePoint. Jeff Teper implied the same when he compared SharePoint to a Swiss Army knife. Both videos are available for viewing at the SPC09 website.

The list just goes on and on! There are way too many things to get excited about in 2010. I am hoping to get into the details of a lot of these in the upcoming sessions.

Day 1 Sessions

For the breakout sessions on day 1, I selected a couple of SharePoint overview topics.  One was SharePoint 2010 Overview and What’s New and, more specifically for developers, Visual Studio 2010 SharePoint Development Tools Overview.  These sessions gave me enough information on the overall features available so I can make a more informed selection in the coming days.

CloudFront, Amazon's Content Delivery Network (CDN)

[Image: “Speed differences between Amazon S3 and CloudF…”, by playerx via Flickr]

It’s nice to see Amazon moving into the CDN space with their CloudFront offering; the CDN market can definitely use a fresh look at the challenge. CloudFront builds off your usage of Amazon S3, with an accelerator finding the closest cache server to deliver your content. With this approach it doesn’t seem like a great fit as a CDN for every architecture. The chart on the right is an interesting comparison.

I’ve been intrigued over the last couple of years by Coral caching. Peer-to-peer open-source caching seems ripe with opportunity: wouldn’t it be cool if my media center PC, Apple TV and other laptops that sit at home idle during the day could be leveraged to help offload servers? I guess it’s a balance of saving power by sleeping or turning off the box vs. using less server power.

[Image: diagram of a peer-to-peer network, via Wikipedia]


Razorfish Technology Capabilities Differentiates

It’s great to get some market recognition for all our efforts. In this post, Forrester recognizes that technology differentiates and that our addition to Publicis will help strengthen their market position. This captured our attention:

“What about Razorfish? The firm has much stronger design capabilities, both for user experience and what we call ‘brand image’. Plus – and this is just my opinion because we did not evaluate them on this – it has stronger technology capabilities as well.”

And validation of what we’ve been saying from the start:

“Because ultimately, a great design has to actually work in order to deliver a great customer experience.”


Taming IE6 and a "Drop IE6" rebuke

During the development of any project that involves HTML, there’s always a nagging question in the back of your mind:  “How broken will this site be in IE6?“  Here’s an article that will reduce the amount of worrying you do when fixing your site to work in IE6.  It covers the majority of issues you’ll encounter when working with IE6.

Definitive Guide to Taming the IE6 Beast

The article covers:

  • conditional comments

  • target IE6 CSS Hacks

  • Transparent PNG-fix

  • double margin on float

  • clearing floats

  • fixing the box model

  • min/max-width/height

  • overflow issues

  • magic-list-items appearing

It’s probably the last article on IE6-specific CSS techniques you’ll ever need to read. Required reading for all PLDs.
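To make a couple of those techniques concrete, here’s a minimal sketch (the `.sidebar` class name and 20px margin are made-up values for illustration): a conditional comment that only IE6 and below will parse, wrapping the classic `display: inline` fix for the double-margin-on-float bug.

```html
<!--[if lte IE 6]>
<style type="text/css">
  /* IE6 doubles the left margin on a left-floated element.
     display:inline cancels the doubling in IE6, and is ignored
     by standards-compliant browsers on floated elements. */
  .sidebar {
    float: left;
    margin-left: 20px;  /* IE6 would render ~40px without the fix */
    display: inline;
  }
</style>
<![endif]-->
```

Other browsers treat the conditional comment as a plain HTML comment, so the override costs nothing outside IE6.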

On the topic of IE6 and whether or not we should still be supporting it, here are some thoughts.

IE6 support seems to be waning, but plenty of our clients are still running IE6 exclusively on their work machines, so until they upgrade to Windows Vista / 7 we’ll have to continue supporting them.

In the past year there have been a few campaigns to get people to upgrade. Also, Google just announced that YouTube will stop supporting IE6 in the near future.

Sadly, the more I thought about just saying “no more IE6 support”, the more I realized that the people running IE6 at this point can’t upgrade. They are usually either on older machines (Windows 2000 or earlier) or their IT won’t upgrade because a legacy web-based application, like a CRM or ERP app, depends on it. These applications aren’t upgraded often, and they are definitely not upgraded during a recession.

Full IE6 support is vital for any site that caters to business users (IT issues / older computers), international users (older computers), or a large percentage of the public (lots of people don’t upgrade their computers/OS when all they do is browse the web with them).

Here’s a good chart that shows the trends for various browsers / versions from Oct-04 to May-09 based on data from

It shows IE6 usage just below Firefox usage in May-09.

As much as I dislike “fixing” the sites I work on to work with IE6, I think we’re going to have to do it at an agency level for another year or so.

Windows : Mac :: Google : (?) - It happens to be "bing"

Believe it or not, Microsoft’s newly evolved search engine bing is nothing less than the answer to the analogy question above. You must have solved many such analogy questions during your SAT exam. When I apply my knowledge and understanding to the question “Windows : Mac :: Google : (?)”, my answer is “bing”.

Remember when Apple released its powerful Mac OS X operating system? Almost every pundit in the industry agreed that the Mac may not be the most powerful system for productivity, but it is certainly the coolest and the best for creativity. Young people care more about creativity and less about productivity. Windows simply felt old and outdated. The Mac operating system took off, and its market share is still increasing at a growing rate. This time bing hits a home run. After the launch of the fully revamped bing search engine (or, as they call it, decision engine), Google search kind of feels old and outdated.

Some people say bing = “But it’s not Google”. I could not agree more. Just as Windows can never be Mac, bing can never be Google. In fact, this time it is better for Microsoft to craft its own path and define its own destiny in search. The cool informational home page image and vibrant brand colors have some kind of enigmatic charm. The creativity of bing may not appeal to the mass population yet, but as far as I know many young kids simply love bing. They think bing aligns more with their taste.

While there are many features that make bing cool (and I will let you find out most of them yourself), I believe the following are, for me, the best:

  • Home page image: Everyday bing has a fresh new vibrant image that simply amazes me.

  • Image search: Image search has never been so good. Every single query provides an option to pick an image and do “find similar”. Additionally, the in-browser searching and navigation of images is absolutely next-generation thinking.

  • Video search: This is where, despite owning YouTube, Google has failed to show value. Thumbnail preview in bing is simply outstanding.

  • Shopping: Oh, the cash-back program. Spend $1,000 on a gadget and get some money back to go have an ice cream this summer. :)

  • News: Google has a strong lead in this field, but bing has gone a step further by adding the ability to search only blogs… I love that feature. People’s opinions matter more than journalists’, wouldn’t you agree?

The list is simple but shiny. bing’s appeal to my creativity (which is not abundant) is noteworthy. I am an avid Google user, but nowadays I go to bing for more than half of my search queries. It is fun and engaging. Google search results are still the best, so when I am searching for something very critical (items my job depends on :)) I still believe in Google, but for everything else I go to bing…

If you have not already then just start “binging it”…

– Salim Hemdani

Brand experiences and MOSS

All the videos for the 2009 MIX conference are posted online, and they are interesting for anybody developing on the Microsoft platform. One of the sessions posted is about developing consumer-facing brand sites, presented by Tony Jones, Technology Director for Razorfish. It touches on the unique challenges that a user-experience-driven company like Razorfish faces when leveraging MOSS for its web experiences, and outlines approaches to addressing these. Highly recommended: How Razorfish Lights Up Brand with Microsoft SharePoint

Cloud interoperability


It’s great to see cloud computing pushing for deep interoperability. This MSDN post covers some interesting topics around the manifesto and also speaks a bit about some interesting demos showing integration between Google’s App Engine and Azure. Very exciting.

“At MIX, we highlighted the use of our Identity Service and Service Bus with an application written in Python and deployed into Google App Engine which may have been the first public cloud to cloud interop demo.”

Reblog this post [with Zemanta]

agile and pair programming

One of my favorite topics in agile and iterative development is pair programming. The question is: can we make it happen more, and do we want to try it more? I’ve typically seen it on smaller and more isolated projects. It’s a fascinating concept, and the research I have found, while minimal, tends to say two developers get more high-quality work done than one working independently.

I also found it interesting that it’s a core tenet of education in some circles today. When my wife was getting her master’s in education, pair learning was one of the approaches she was taught. Often it’s three or four students, but two works. All her classrooms are broken into small groups, and there’s lots of educational research backing up the fact that students learn more working in small groups than alone. I’ll ask her for some research links.

I ran across a Distributed Agile post today that dug up some more research backing up pair programming. Here’s what the post had to say:

“Pairing is the most powerful tool they’ve ever had. Skills don’t matter as much as collaboration and asking questions. Goal for new hires is to get their partner hired. Airlines pair pilots… Lorie Williams at the University of North Carolina did an experiment and found that the paired team produced 15% less lines of code with much better quality”


OpenCloud Manifesto = Skynet

The Terminator album cover. Image via Wikipedia

Exciting to see folks pulling together some cloud computing standards to help us live seamlessly across the different cloud vendor offerings. I heard it first on the This Week in Tech podcast; it’s starting to sound a lot like the Terminator’s version of Skynet. Get it, clouds, Skynet… Anyway, it seems like this should be a requirement for redundancy, not to mention the ability to move based on feature needs. Yes, sure, cloud computing is inherently redundant, but only across one vendor. It’ll also help us realize the best value and features quickly. I think the other thing it shows is that there is a lot of room for competition. It won’t just be the big players out there.

The manifesto itself was also interestingly missing all of the big players. A quick glance shows it’s refreshingly light, which is good. It suggests more standards are on the way, which may or may not be a good thing; I think there are lots of lessons to be learned from standards like CORBA or ws-deathstar. All in all good news, and a recognition that the clouds are moving quickly.


Ready for Web 3.0/Semantic Web?

When mainstream media starts talking about the Semantic Web, one can infer that it is not just another buzz within research labs. Recently The Economist and BBC online covered this topic. Early this month Thomson Reuters announced a service that will help with semantic markup.

Semantic Web Primer

The term Semantic Web was first used by Sir Tim Berners-Lee, the inventor of the World Wide Web, to describe a web where “day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines”. The most significant aspect of the semantic web is the ability of machines to understand and derive semantic meaning from web content. The term Web 3.0 was introduced in 2006 for a next-generation web with an emphasis on semantic web technologies. Though the exact meaning and functionality of Web 3.0 are vague, most experts agree that we can expect Web 3.0 in some form starting in 2010.

There are two approaches to extracting semantic knowledge from web content. The first involves extensive natural language processing of the content, while the second places the burden on content publishers to annotate or mark up the content. This marked-up content can then be processed by search engines, browsers or intelligent agents. The markup approach overcomes the shortcomings of natural language processing, which tends to be non-deterministic; furthermore, meaning depends not only on the written text but also on information that is not captured in it. For instance, an identical statement by Jay Leno or by Secretary Hank Paulson may have a totally different meaning.

The ultimate goal of Web 3.0, to provide intelligent agents that can understand web content, is still a few years away. Meanwhile, we can start capturing information and building constructs into our web pages that help search engines and browsers extract context and data from content. There are multiple ways of doing semantic markup of web content that is understood by browsers and search engines.

Semantic Search Engines

On Sept 22, 2008 Yahoo announced that it would be extracting RDFa data from web pages. This is a major step in improving the quality of search results. Powerset (recently acquired by Microsoft) is initially allowing semantic searches on content that is fairly structured. Hakia uses a different approach: it processes unstructured web content to gather semantic knowledge. This approach is language-based and dependent on grammar.

Semantic markups - RDFa and microformats

The W3C consortium has authored specifications for annotation using RDF, an XML-based standard that formalizes relationships between entities using triples. A triple is a notation involving a subject, a predicate and an object; for example, in “Paris is the capital of France”, the subject is ‘Paris’, the predicate is ‘capital’, and ‘France’ is the object. RDFa is an extension to XHTML to support semantic markup that allows RDF triples to be extracted from web content.
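As a rough sketch, that triple could be expressed in RDFa like this (the `ex:` prefix and the resource URIs are illustrative assumptions, not part of any published vocabulary):

```html
<div xmlns:ex="http://example.org/terms/"
     about="http://example.org/resource/Paris">
  <!-- subject: Paris; predicate: ex:capitalOf; object: France -->
  <span rel="ex:capitalOf"
        resource="http://example.org/resource/France">
    Paris is the capital of France
  </span>
</div>
```

An RDFa-aware parser reads the `about`, `rel` and `resource` attributes and extracts the triple (Paris, capitalOf, France), while an ordinary browser simply renders the sentence.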

Microformats are simpler markups using XHTML and HTML tags which can be easily embedded in web content. Many popular sites have already started using microformats. Flickr uses geo for tagging photo locations, and hCard and XFN for user profiles. LinkedIn uses hCard, hResume and XFN on user contacts.

Microformat hCard example in HTML, and the resulting output on the browser page:

   <div class="vcard">
     <span class="fn">Atul Kedar</span>
     <div class="org">Avenue A | Razorfish</div>
     <div class="adr">
       <div class="street-address">1440 Broadway</div>
       <span class="locality">New York</span>,
       <span class="region">NY</span>
       <span class="country-name">USA</span>
     </div>
   </div>

Atul Kedar
Avenue A | Razorfish
1440 Broadway
New York, NY USA
Microformat hCalendar entry example with browser view:

   <div class="vevent">
     <abbr class="dtstart" title="2008-09-18">September 18th, 2008</abbr>
     <span class="summary">Web 3.0</span> at
     <span class="location">Sunnyvale, CA</span>
   </div>

September 18th, 2008: Web 3.0 at Sunnyvale, CA

Tags:  SemanticWeb



As you can see from the above examples, microformats can be added to existing content and are interpreted correctly by browsers. There are many more entities that can be semantically tagged, such as places, people and organizations. Some web browser enhancements (for Firefox) recognize these microformats and allow you to add them directly to your calendar or contacts with a single click.

Automated Semantic markup services and tools

Another interesting development is in the area of automatic entity extraction from content; annotation applications and web services are being developed for this. Thomson Reuters is now offering a professional service, OpenCalais, to annotate content. Powerset is working towards similar offerings. These services reduce the need for content authors to painstakingly go through the content and manually tag all relationships. Unfortunately, they are not perfect and need manual cross-checking and edits. Other similar annotation services or tools are Zemanta, SemanticHacker and Textwise.

Next Steps

As Web 3.0 starts to take shape, it will initially affect the front-end designers involved with the web presentation layer, as organizations demand more semantic markup within the content. In due course, CMS architects will have to update the design of data entry forms and of entity information records in a manner that facilitates semantic markup and removes any duplication of entity data or entity relationships. Entity data such as author information, people information, addresses, event details, location data and media licensing details are perfect candidates for new granular storage schemes and data entry forms.



Google Chrome - Why is it different?

Yesterday, Google launched the first beta of its open-source web browser, Chrome. Some people consider this launch Google’s attack on Microsoft’s IE, and some regard it as just another browser to choose from. Google indirectly claims that the market needed a fresh web browser: a browser written from scratch with next-generation thinking, even though Chrome is built on WebKit, an existing open-source browser engine. Google’s long-term strategy behind this product is unknown, but I believe it is the move Google should have made a long time ago.

One of my old colleagues used to say, “Do not go under the spotlight if you cannot control the outcome on the stage.” Google’s business is based solely on the internet and its growth. Google has been under the spotlight on the stage of the internet from the beginning, but it had zero control over the way people got to the internet and the way browsers interpreted web pages. With Chrome, which I predict will gain substantial market penetration soon, Google will gain some control over the outcome.

Apart from the business strategy aspect, the Chrome browser does have some neat technology advancements. Chrome makes every effort to be more stable, faster, cleaner, simpler, more efficient and safer. A few things are specifically noteworthy:

  • A multi-process architecture that gives each tab an independent browsing environment, making the browser fast, stable, scalable and safe.

  • A platform-independent JavaScript virtual machine called V8, which compiles JavaScript source into native machine code for faster execution.

  • Smart, conservative garbage-collection methods for fast JavaScript execution.

  • Open-source Gears support, so the developer community can create additional features.

  • And lastly, search over your browser history, suggestions integrated into the address bar, an incognito mode for private browsing, sandboxed plug-in controls, a pop-up blocker, phishing-site security warnings, etc.

Google launched Chrome for Windows only, which forced me to restart my Mac in Boot Camp mode with Windows Server 2003. I am eagerly awaiting the Mac version, which Google has promised to launch shortly.

– Salim Hemdani

New social media offering

Digitage Web 2.0Image by ocean.flynn via Flickr

We just announced a new offering with our social-media partner Pluck and their product code-named AdLife. AdLife will inject social media features like customer comments and user-generated content into digital advertisements such as banner ads or micro sites - in effect, turning mainstream ads into social media opportunities distributed across the digital world.

From a technology community perspective, we have worked with Pluck on several clients to bring social media features to their websites. Pluck’s services are available as software-as-a-service, which enables us to deliver solutions faster for our clients. There are some key challenges, such as dealing with AJAX/Flash/Silverlight integration while still enabling SEO. Unfortunately, search engines are not yet able to deal well with rich internet applications, but we have some ideas on how to address this.

That being said, I think there are a couple of critical elements to why we have to make our sites social.

  1. We are social beings; our sites should follow. Communities will help us make our sites better, by adding the right metadata through comments, confirming where we get it right, or telling us where we get it wrong - Wikipedia, anyone? :)

  2. Without bringing social technologies to our clients’ sites, the sites we build won’t be found. Organic search engines depend on social signals to surface pages. Remember Google bombing? That was enabled through blogging and trackbacks, a key aspect of social. If your site/page isn’t enabled socially, it won’t rank in Google, Live, and Yahoo.

  3. Outside of organic search, people read things on the web so they can send them to their social graph. Again, we must make it easy for people to connect with their social graph, so they can passively or actively tell their community about the content.

At Avenue A | Razorfish, we’re one of the largest buyers of online media in the world and we’re partnering with Pluck, a social media technology vendor serving 2.5 billion impressions a month to bring this to life. For more information read the press release or read David Deal’s blog.


How cool is this new search engine - Cuil?

By now almost everyone who keeps an eye on the search market is aware of the launch of the new search engine Cuil. And to my amazement, most of us have already given a verdict on this new offering as to whether it is a real Google killer or just another dying hope for Google haters. Cuil has also gotten a lot of attention from Google lovers - this launch was a real attention grabber.

So what makes Cuil the center of attention?

  • Cuil was developed by a couple of engineers who had a hand in developing Google’s search engine. This particular fact has given them a lot of credibility.

  • Secondly, this search engine indexes many more pages than Google does. I did not know that Google does not index the entire world’s web pages – maybe Cuil indexes pages from another planet’s ecosystem too, but anyhow this fact makes for an attractive offering. More pages mean more results.

  • Finally, last but not least, and I think most important: Cuil does not remember your search queries – a big win for those privacy advocates who have been after Google for a long time now.

So what is your verdict on this cool or not-so-cool offering? I will share my first experience, and for those who believe “the first impression is the last impression”, I did not have a good experience. My first hit on the day it launched resulted in a service-not-available page (too many hopefuls flocking in at once). Since then the service has been brought back up and I have given it a few more tries, but I could not make up my mind.

As of today Google is still the king of search for me, and I bow to thee. What do you guys think?

Yahoo! UI Blog recognizes Fred and team

Recently, the Yahoo UI blog recognized Fred Welterlin and team’s outstanding work on Pulte Homes’ new site. The post talks about the Yahoo UI components used and how we made the JavaScript library choice. This is typically a challenge for us, given the great selection of libraries out there. It’s interesting how Fred highlights that one of the drivers for choosing YUI was its use of the autocomplete pattern. Congrats to Fred and team for being recognized for their great work.

ICANN threatens to change the rules of the domain name game

You may be used to typing in top-level domains (TLDs) like .com, .net or .edu when heading to websites, but the Internet Corporation for Assigned Names and Numbers (ICANN) hopes to change that with a decision to open new TLDs for registration, according to today’s Wall Street Journal.

Under the new rule, ICANN would let anyone with $50,000 to $100,000 register any TLD they want, so for example, a web address could become paras.wadehra, rather than

The WSJ has more on what the decision may mean for regular consumers and businesses, but there are also a couple ways it could change the Internet landscape for startups — most notably, domain speculators like Demand Media and Marchex.

Those companies, and other speculators, have plowed billions of dollars into millions of hot domain names, sometimes backed by high-profile investors like Oak Investment Partners or, for Marchex, public shareholders. The idea is generally to buy up lots of obvious domain names, like, which holds the sales record at $350 million. Most good names that are auctioned get less, but still routinely receive six figures.

Those domains are worth so much because of a kind of traffic called type-in traffic, which is distinct from search traffic from Google or linked traffic. Right now, if a web surfer — especially an unsavvy one — wants to find, say, exchange rates, they might type in hopes of finding an exchange calculator (they’d be disappointed).

Although the strategies of the two companies are different (Marchex, notably, wants to build out a locality-based content business), they both rely on one crucial assumption: that the dominant TLDs, primarily .com, continue to be the first thing people type in when they’re looking for something, whether it’s exchange rates or So what happens if ICANN manages to reeducate Internet users, and popularize sales of new TLDs?

The simple answer is that a lot of speculators will lose a lot of their own, and their investors’, money. While Demand and Marchex might be able to build up viable content portals around sites like, the money they plowed into those names will be meaningless — as well spent on or chicago.dr, or any other name you can imagine. The game will become even more about search; type-in traffic will wither.

There’s a strong counter-argument to ICANN’s action having any real effect on .com, though. There are already dozens of top-level domains, but they are thinly used, even purpose-built ones like .mobi (for mobile phones). The introduction of more TLDs over the years has not diminished sales of hot domains, which by extension probably means speculators are making as much as ever.

That may hold true, or it may be that ICANN has finally found a way to shift attention from .com, with the possibility for new TLDs that are actually meaningful or logical.

And a final argument is that it does seem unreasonable that 10 or 15 years from now, we’ll still be typing .com in for every major website. The Internet is a place of rapid change, and at some point, .com will start seeming archaic and unnecessary. But any real change would require a massive re-engineering of the web’s user interface, at the very least, so it’s hard to imagine from here what those changes might be.

Avenue A | Razorfish wiki mention in Infoworld

It’s pretty interesting how the press and our clients continue to take an interest in our internal knowledge management wiki. Here Infoworld captures some thoughts from Shiv Singh on why we built the wiki. It’s all about bringing some of the innovations from the consumer-facing world into enterprises. Learning from the consumer world to help enterprises is going to take some time. What I think enterprises need to acknowledge is that collaboration isn’t easy, so the technology needs to make it easy. If we ask folks to open a ticket or get permission every time they want to contribute, collaboration is just not going to happen. Like Shiv said, we have found people behave just as professionally on the wiki as they do in the office, so let’s trust them to use open technologies.

It’s not just the features we are talking about, either. The technologies behind these platforms are interesting as well. Some of the biggest, most successful sites out there aren’t built on enterprise technologies. MediaWiki, the software behind Wikipedia, is built on PHP, not Java or .NET. Not only can we learn from consumer-facing behaviors, we can also learn from consumer-facing technologies. What’s nice about technologies like PHP is their ability to start up quickly and change just as quickly - something that has gotten harder and harder for J2EE and .NET. After all, the one constant with web sites is change.

Google Gears and the offline/online trend

With Google Gears, Adobe AIR, and Microsoft WPF there are definitely lots of exciting changes in the desktop application area - using the openness of the web to crack open the ‘closed’ nature of the regular documents we use today. At the recent Avenue A | Razorfish Enterprise Solutions summit, Andrew McAfee asked the audience who works on documents alone. Only one person in the room of 70 people raised their hand (still not sure why :)). The point is that we collaborate on everything we do, and the traditional method of document revisions and changes is much slower than real-time changes and updates, wiki style. The challenge is applying that to all the tools we use on a daily basis. How can we make code changes more collaborative and less of a check-in, check-out, merge model?

LiveMesh, the new 'synchronization' platform from Microsoft

Think about an online-offline silverlight-wpf application that synchronizes your files using LiveMesh.

I like the name. I finally see the Live brand starting to come together for Microsoft. Now all it needs is some more market awareness. So, what is LiveMesh? It’s basically a new (invite-only for now) platform that allows people to sync across all devices. Windows only for now, but that seems likely to open up, especially since data can be expressed as ATOM, JSON, FeedSync, WBXML, or plain old XML.

In a previous post, I spoke about Google Gears and their technology to bring together the off-line and on-line world. LiveMesh is actually

The more I switch across laptops and machines, the more I yearn for a cloud to contain everything. I recently moved from Trillian to Meebo, just so I had one less desktop application I was tied to. This way, whatever machine I go to, I have my instant messaging list available. A web-based Outlook as good as the desktop Outlook would be a welcome addition. That being said, at the end of the day I want both - especially as I write this post from the plane, offline, using a desktop application, Windows Live Writer.

Skype announces unlimited long-distance calls

Last month Skype announced unlimited calling to over a third of the world’s population with the launch of its new calling subscriptions. The new subscriptions mark the first time Skype has offered a single, monthly flat rate for international calling to landline numbers in 34 countries.

The new subscriptions have no long-term contract. You can make calls whenever you want – at any time of the day, on any day of the week. From today, you can choose from three types of subscription – from unlimited calls to landlines in the country of your choice through to landlines in 34 destination countries worldwide.

However, it’s not truly unlimited calling - all calls are subject to Skype’s fair usage policy, which is set at 10,000 minutes per month (which equates to roughly 5½ hours of calling per day). Calls to premium, non-geographic and other special numbers are excluded.

Syndicated Client Starter Kit

When doing WPF development, a good source of information is One interesting download on that site is the Syndicated Client Starter Kit. It is a starter kit designed to make it easy to create rich, syndicated multimedia and content client applications. It has built-in ad-serving capabilities, and includes the sync framework that takes care of syncing, local storage, subscription management and the safe caching of authentication credentials. The MSDN Reader sample application, and the starter kit itself, are available for download, including source code.

Reviewing the source code is a great way to gain insight on how WPF applications can be structured, and some of the architectural patterns that are used within the code, such as the Command Pattern.

Another interesting aspect of this starter kit is that it uses SQL Server Compact Edition for storing data client side, and I think this is a great alternative to SQL Server Express. Even though both are free, SQL Server CE has a benefit of being more lightweight, and easier to deploy with your client application.

News Corp., AOL Pursue Yahoo Deals

Yahoo Inc. and Time Warner Inc.’s AOL are closing in on a deal to combine their Internet operations. But Microsoft is recrafting its assault plan by talking with Rupert Murdoch’s News Corp., publisher of The Wall Street Journal, about mounting a joint bid for Yahoo, people familiar with the matter said. Microsoft and News Corp. have yet to reach an agreement on joining forces but one person apprised of the plan described the discussions as serious. Such a deal would combine three of the biggest Internet properties: News Corp.’s MySpace, Microsoft’s MSN and Yahoo.

The AOL-Yahoo deal under consideration would include the repurchase of some Yahoo shares at a price above Microsoft’s offer. Taken together with a possible search advertising pact with Google Inc., the plan could give Yahoo an alternative to a Microsoft takeover – although many analysts and investors believe Microsoft will ultimately win out. At the least, Yahoo’s efforts could give it more leverage to negotiate a higher price from Microsoft.

Surface Launch

As mentioned in an earlier post, we worked with AT&T and the Microsoft Surface team to build a Surface application for AT&T retail stores. It was demoed two weeks ago in Vegas, and will go live April 17 in stores in Atlanta, New York, San Francisco and San Antonio. See also the video below:

Microsoft Sends Letter to Yahoo! Board of Directors

Microsoft Corp. (NASDAQ: MSFT) sent the following letter to the Yahoo! Inc. (NASDAQ: YHOO) Board of Directors:

Dear Members of the Board:

It has now been more than two months since we made our proposal to acquire Yahoo! at a 62% premium to its closing price on January 31, 2008, the day prior to our announcement. Our goal in making such a generous offer was to create the basis for a speedy and ultimately friendly transaction. Despite this, the pace of the last two months has been anything but speedy.

While there has been some limited interaction between management of our two companies, there has been no meaningful negotiation to conclude an agreement. We understand that you have been meeting to consider and assess your alternatives, including alternative transactions with others in the industry, but we’ve seen no indication that you have authorized Yahoo! management to negotiate with Microsoft. This is despite the fact that our proposal is the only alternative put forward that offers your shareholders full and fair value for their shares, gives every shareholder a vote on the future of the company, and enhances choice for content creators, advertisers, and consumers.

During these two months of inactivity, the Internet has continued to march on, while the public equity markets and overall economic conditions have weakened considerably, both in general and for other Internet-focused companies in particular. At the same time, public indicators suggest that Yahoo!’s search and page view shares have declined. Finally, you have adopted new plans at the company that have made any change of control more costly.

By any fair measure, the large premium we offered in January is even more significant today. We believe that the majority of your shareholders share this assessment, even after reviewing your public disclosures relating to your future prospects.

Given these developments, we believe now is the time for our respective companies to authorize teams to sit down and negotiate a definitive agreement on a combination of our companies that will deliver superior value to our respective shareholders, creating a more efficient and competitive company that will provide greater value and service to our customers. If we have not concluded an agreement within the next three weeks, we will be compelled to take our case directly to your shareholders, including the initiation of a proxy contest to elect an alternative slate of directors for the Yahoo! board. The substantial premium reflected in our initial proposal anticipated a friendly transaction with you. If we are forced to take an offer directly to your shareholders, that action will have an undesirable impact on the value of your company from our perspective which will be reflected in the terms of our proposal.

It is unfortunate that by choosing not to enter into substantive negotiations with us, you have failed to give due consideration to a transaction that has tremendous benefits for Yahoo!’s shareholders and employees. We think it is critically important not to let this window of opportunity pass.

Sincerely,

Steven A. Ballmer
Chief Executive Officer
Microsoft Corp.

Yahoo!'s Board of Directors Responds to Latest Microsoft Letter

The Board of Directors of Yahoo! Inc. (Nasdaq:YHOO), a leading global Internet company, today sent the following letter to Steve Ballmer, Chief Executive Officer of Microsoft Corporation.

Dear Steve:

Our Board has reviewed your most recent letter with regard to the unsolicited proposal you made to acquire Yahoo! on January 31, 2008.

Our Board carefully considered your unsolicited proposal, unanimously concluded that it was not in the best interests of Yahoo! and our stockholders, and rejected it publicly on February 11, 2008. Our Board cited Yahoo!’s global brand, large worldwide audience, significant recent investments in advertising platforms and future growth prospects, free cash flow and earnings potential, as well as its substantial unconsolidated investments, as factors in its decision.

At the same time, we have continued to make clear that we are not opposed to a transaction with Microsoft if it is in the best interests of our stockholders. Our position is simply that any transaction must be at a value that fully reflects the value of Yahoo!, including any strategic benefits to Microsoft, and on terms that provide certainty to our stockholders.

Since disclosing our Board’s position with respect to your proposal, we have presented our three-year financial and strategic plan to our stockholders, which supports our Board’s determination that your unsolicited proposal substantially undervalues Yahoo!. Those meetings with our stockholders have also provided us an opportunity to hear their views.

We have continued to launch new products and to take actions which leverage our scale, technology, people and platforms as we execute on the strategy we publicly articulated. Today, in fact, we are announcing AMP! from Yahoo!, a new advertising management platform designed to dramatically simplify the process of buying and selling ads online.

Finally, our Board has been actively and expeditiously exploring our strategic alternatives to maximize stockholder value, a process which is ongoing. All of these actions have been driven by our overarching commitment to maximize stockholder value.

Our Board’s view of your proposal has not changed. We continue to believe that your proposal is not in the best interests of Yahoo! and our stockholders. Contrary to statements in your letter, stockholders representing a significant portion of our outstanding shares have indicated to us that your proposal substantially undervalues Yahoo!. Furthermore, as a result of the decrease in your own stock price, the value of your proposal today is significantly lower than it was when you made your initial proposal.

In contrast to your assertions about the effect of general economic conditions on our business, Yahoo!’s business forecasts are consistent with what we outlined in our last earnings call. As you know, we recently reaffirmed our Q1 and full year guidance, which is a testament to our ability to perform in line with our expectations despite the current economic environment. In addition, our three-year financial and strategic plan which we have made public demonstrates significant potential upside not previously communicated to the financial markets. This plan has received positive feedback from our stockholders, further strengthening the view that Yahoo! is worth well more as a standalone company than the value offered in your proposal, and would be even more valuable to Microsoft. Your own statements have made clear the strategic importance of Yahoo!’s substantial assets and capabilities to Microsoft.

We regret to say that your letter mischaracterizes the nature of our discussions with you. We have had constructive conversations together regarding a variety of topics, including integration and regulatory issues. Your comment that we have refused to enter into negotiations to conclude an agreement are particularly curious given we have already rejected your initial proposal, nominally $31 per share at the time, for substantially undervaluing Yahoo! and your suggestions in your letter and the media that you are considering lowering the value of your proposal. Moreover, Steve, you personally attended two of these meetings and could have advanced discussions in any way you saw fit.

As to antitrust, we have discussed with you our concerns. Any transaction between us would result in a thorough regulatory review in multiple jurisdictions. As a follow up to a recent meeting among our respective legal advisors we had on this topic, and at your request, we provided to you on March 28 a list of additional information we would need to further our understanding of the regulatory issues associated with any transaction. To date, you have still not provided any of the requested information.

We consider your threat to commence an unsolicited offer and proxy contest to displace our independent Board members to be counterproductive and inconsistent with your stated objective of a friendly transaction. We are confident that our stockholders understand that our independent Board is best positioned to objectively and knowledgeably evaluate our Company’s alternatives and to maximize value.

In conclusion, please allow us to restate our position, so there can be no confusion. We are open to all alternatives that maximize stockholder value. To be clear, this includes a transaction with Microsoft if it represents a price that fully recognizes the value of Yahoo! on a standalone basis and to Microsoft, is superior to our other alternatives, and provides certainty of value and certainty of closing. Lastly, we are steadfast in our commitment to choosing a path that maximizes stockholder value and we will not allow you or anyone else to acquire the company for anything less than its full value.

                                             Very truly yours,

           Roy Bostock                          Jerry Yang
           Chairman of the Board                Chief Executive Officer

Now that's a cool table

We worked with AT&T and the Surface folks to build this desktop application that will be showing up in AT&T locations across the country. Some of the fun features will be the interactive map, real-time phone comparisons by placing two phones on the table, and lots more. We built the application using .NET 3.5 and Windows Presentation Foundation for the core application. For back-end connectivity we leveraged Windows Communication Foundation and LINQ. Check out the video below.

Putting JCR into action – Sling and µJAX


During the last couple of years the enterprise content management community has noticed a steady rise in the popularity of JCR, aka JSR-170. A lot of CMS vendors and application server vendors have pledged support to this initiative. For those who are not familiar with JCR, it is the Content Repository API for Java Technology - a Java standard advocated by Day Software. JCR provides a “JDBC” equivalent for access to content, so that the application code does not have to know the details of the underlying vendor implementation. JCR aims at providing an elegant solution to some of the content management challenges faced by organizations dealing with a profusion of content repositories and CMS products. JCR promises the “best of both worlds” by providing a single API to access and modify content stored in a DB or in a file system.

I have been tracking the evolution of products based on JCR implementations for a while and was initially quite disappointed to see that, other than Day (the proponent of JCR), none of the major ECM vendors provided any real support for JCR-based repositories. Recently, however, the situation has gotten much better, with Alfresco offering a JCR-170 compliant repository and Day supporting the Apache Sling initiative that exposes the JCR APIs via REST.

Exposing the JCR API via a set of RESTful services is a wonderful idea, and Sling is one of the first RESTful open source web application frameworks built on top of JCR. Sling uses Apache Jackrabbit as the JCR repository. JCR is a tree-structured, or hierarchical, content store. So it naturally makes sense to map tree-structured URLs directly to the tree structure of JCR, so that nodes can be accessed just like a file system with static files.
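To make the URL-to-node mapping concrete, a Sling-style request path splits into the node path, optional "selectors" that pick a rendering, and an extension for the output format. The sketch below is a simplified illustration of that decomposition, not Sling's actual resolution algorithm (which also handles suffixes, missing extensions, and resource types):

```java
// Simplified sketch of Sling-style URL decomposition: everything before
// the first dot addresses a JCR node; the remaining dot-separated
// segments are selectors plus a trailing extension. Illustration only,
// not Sling's real (and more elaborate) resolution logic.
public class SlingUrlSketch {
    static String[] decompose(String requestPath) {
        int dot = requestPath.indexOf('.');
        if (dot < 0) {
            return new String[] { requestPath, "", "" }; // plain node path
        }
        String resourcePath = requestPath.substring(0, dot);
        String rest = requestPath.substring(dot + 1);
        int lastDot = rest.lastIndexOf('.');
        String selectors = lastDot < 0 ? "" : rest.substring(0, lastDot);
        String extension = lastDot < 0 ? rest : rest.substring(lastDot + 1);
        return new String[] { resourcePath, selectors, extension };
    }

    public static void main(String[] args) {
        String[] parts = decompose("/content/blog/my-post.print.html");
        System.out.println("node: " + parts[0]);      // /content/blog/my-post
        System.out.println("selectors: " + parts[1]); // print
        System.out.println("extension: " + parts[2]); // html
    }
}
```

So a GET of `/content/blog/my-post.print.html` would read the node at `/content/blog/my-post` and render it with a "print" variant as HTML, which is what makes the repository feel like a file system of static files.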

Before looking at Sling, I tried to build a blogging tool using an evaluation copy of CRX (Day’s Communiqué) and the µjax library that came with it. Unfortunately I didn’t make much progress with it because of a lack of supporting documentation, missing files, etc. Then I downloaded Sling Launchpad (a ready-to-run Sling configuration, with an embedded JCR content repository and a web server with some Sling components). The Maven build went fairly smoothly, and I was up and running in less than 10 minutes or so. Initially I used cURL to post content into the repository and later switched to an HTML form. Content in the repository can be rendered using the ESP server-side JavaScript module that comes with Launchpad.

The biggest plus I see with Sling is that it handles the normal GET and PUT methods out of the box and translates them into the corresponding JCR actions to retrieve and post content into the repository. The blogging tool is a very simple form of CMS, but it can be extended easily by writing servlet code. Although I used ESP to render the page, Sling supports other scripting languages supported by the Java Scripting framework (JSR-223).

I see a lot of potential for Sling. If we can build a good interface for posting and editing the content in the repository (I have been bugging Atul Kedar to post his custom JavaScript library for building advanced content entry forms into some open source repository) it could beat many of the CMS products out there in the areas of customizability and flexibility.

RIAs and Content Management

Forrester recently published a report on Rich Internet Applications and Content Management. The report covers some of the key topics in this area, such as organic search engine optimization, changes in build and release management, and how to change content management to better support RIAs. The content for the report came from interviews with folks managing and building sites with these technologies. Mike Scafidi’s architecture around Search Optimized Flash Architecture (SOFA) gets a mention.

Share Google Spreadsheets via Google Gadgets

This is kind of neat: now you can share parts of your collaborative spreadsheets via Google Gadgets. So, they can show up on your iGoogle page and other places. A neat way to pull folks into looking at your exciting number crunching ;). Google also launched spreadsheet notifications. Tools like Zoho Docs and Google Docs are very cool, and now with the ability to syndicate more easily, that should help adoption.

Read more here. There are also some cool gadgets around visualization.

FCC Closes 700MHz Auction at $19.6B

Bidding in the FCC’s 700MHz auction closed March 18, 2008, after the auction raised a record $19.6 billion over 261 bidding rounds. The winners of the spectrum have not yet been disclosed by the Federal Communications Commission. The results of this single spectrum auction surpass the $19.1 billion combined total raised by the FCC in 68 other auctions over the last 15 years. The proceeds will be transferred to the U.S. Treasury by June 30, earmarked to support public safety and digital television transition initiatives.

The spectrum auction is part of the transition to digital television that will culminate in all television signals switching from analog to digital on Feb. 17, 2009. The FCC also placed conditions on the sale of the C block spectrum, requiring the winning bidder to build an open network to which users can connect any legal device and run the software of their choice. Before the auction began in January, Google committed to meeting the minimum bid in the C block. AT&T and Verizon were also interested in the spectrum. Although the FCC did not say when the winner would be announced, the current speculation is that the FCC will release the information by the end of March or early April.

“The open platform will help foster innovation on the edge of the network, while creating more choices and greater freedom for consumers to use the wireless devices and applications of their choice,” FCC Chairman Kevin Martin said in a statement. “A network more open to devices and applications can help ensure that the fruits of innovation on the edges of the network swiftly pass into the hands of consumers.”

Looking under the covers..

A very cool site that gives us a look under the covers at what some of the super high-volume sites out there are doing to get their content and functionality to people. It’s amazing to see how some of the highest volume sites out there leverage open source so effectively. YouTube’s architecture looks very interesting. There’s also an interesting summary of the Microsoft Cloud plan.

“Beautiful Code: Leading Programmers Explain How They Think”

“Beautiful Code: Leading Programmers Explain How They Think” is yet another excellent book put out by O’Reilly Books. It contains 33 essays written by respected members of the software development community, and each essay is a variation on the theme of how we define “Beautiful Code”. The range of topics is vast and includes subjects like “A Regular Expression Matcher”, “Distributed Programming with MapReduce”, “Python’s Dictionary Implementation”, and [one of my favorites] “The Most Beautiful Code I Never Wrote”. Each of the authors does a fantastic job of being as concise as possible without sacrificing an accurate presentation and solution of the covered problem.

Although the authors certainly deserve applause, the editors Andy Oram and Greg Wilson are truly the unsung heroes in this publication. I can imagine it was a difficult task to make these separately written essays feel more like a book and less like a collection of chapters. How they accomplished this, however, may not be immediately apparent.  What I found was that within each of the 33 individual essays, lies a unique engineering principle that will emerge in successful software development endeavors. Moving beyond each of those principles within each of those essays, I realized that it is only the developer who grasps these tenets who is capable of writing Beautiful Code. 

Before moving on, I would like to point out that “Beautiful Code” is a book written for advanced developers. If you do not have some degree of familiarity with topics like: Finite Automata, C Programming, Concurrency Problems, Protocol Stacks, Haskell, Cryptography, Algorithm Proof Techniques, and Kernel Development, this book is probably not for you. Even if you are comfortable with these topics, be prepared to encounter a few chapters that may need to be read twice or require a bit of internet sleuthing before you grok it. 

Below, I’d like to present an example of what I mean by “the essay” versus “the extracted principle”.

Chapter Three is titled “The Most Beautiful Code I Never Wrote”. It is written by Jon Bentley, and it takes a unique look at the famed QuickSort algorithm. Without delving into the essay itself, Bentley cleverly presents how the beauty of the algorithm lies within the code that is not written at all. Two quotes I’ve highlighted from the chapter are:

“A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away” - Antoine de Saint-Exupery

“Simplicity does not precede complexity, but follows it” - Alan Perlis

I once overheard a seasoned developer say to a younger developer, “Develop a little, refactor a little. This is your mantra. Simple code simply works”. Developers who strive for simplicity, who strive to write code that not only “works” but “works well”, are developers who produce code with fewer bugs that requires less maintenance. That is the extracted principle presented in this chapter and, ultimately, that is the difference between brute force workmanship and eloquent craftsmanship.

Within each essay is a life lesson for anyone who is a practicing technologist and I highly recommend this book to anyone who fancies themselves as such. I suspect that “Beautiful Code” would fit nicely between “The Pragmatic Programmer” and “The Practice of Programming” on any development shop’s bookshelf.

The Tools Google Uses Internally

A web seminar Google held with KMWorld Magazine offered a great deal of insight into how Google manages projects and communication internally. The presentation by Google followed an employee through his first few weeks at the company, explaining the many tools he uses: from the Google intranet MOMA, the Google Ideas site and Google Caribou Alpha, to Google Experts Search, “Googler Search,” and Google Apps.

Here are a few links to view the content of the presentation.

Better, by Atul Gawande

I just finished a book I found hard to put down, Better, by Atul Gawande. Atul is a surgeon in Boston who writes for the New Yorker and has published a couple of books. I love his essays and thoroughly enjoyed his first book, Complications: A Surgeon’s Notes on an Imperfect Science. In Better, Atul takes an unabashed look at how the medical profession can do things well, better. He uses examples ranging from US military battlefield techniques in Iraq to a Minnesota doctor’s relentless focus on helping cystic fibrosis patients live longer. One of the most exciting things about his writing is his ‘Spock’-like ability to examine the things that go well and the things that go poorly. It isn’t good or bad; it’s something we can learn from and improve on.

So, how does this relate to building software? Well, at the end of the book he comes up with five things we can do to be a ‘positive deviant’.

  1. **Ask an unscripted question.** Often times we are caught up with the task at hand and we don’t take the time to make more human connections with co-workers. I bet this would go a long way to helping inter-disciplinary understanding (see Troy’s post).

  2. **Don’t complain.** Keep the conversation moving in a positive direction. Sure, we don’t need to ignore uncomfortable things, but after too many complaint sessions we all walk away feeling angry and sorry for ourselves. This isn’t likely to help us get better at software development.

  3. **Count something.** This one is always fun. Counting things is great. I love looking at the analytics of our internal wiki/blog tools. It’s a great way to get a sense of how much we are collaborating and to find the peaks and valleys. I once spent a couple of late-night hours on a project writing regex statements to determine how many lines of code each developer wrote and how many bugs were assigned to them. As a team we loved looking at the numbers and worked to come up with different ways to look at performance and improve on it. This happens all too rarely.

  4. **Write something.** It’s the process more than the content. Working through an essay, blog post, or white paper helps us clear our thoughts and get a sense of our larger purpose. It’s good stuff, even if it feels like a challenge to find the time.

  5. **Change.** Something we hear with just about every project we start is: how can we get it done faster, and once it’s done, how can we change it faster? It feels like the answer is change. This is what excites me about technologies like Grails (see Jo’s post) and Ruby on Rails. It’s what excited me about Java 1.0 over C back in the day. So, what are the changes we need to make to get things done faster? Can we do things before the user experience, creative and business teams finish or even start defining them?
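The line-counting exercise from item 3 can be sketched with a quick regex pass over annotated source. A minimal sketch in Java, assuming a hypothetical `svn blame`-style log format (revision number, author, then the line of code); adapt the pattern to whatever your version-control tool actually emits:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Count lines of code per developer from "blame"-style annotated output.
// The input format here is made up for illustration.
public class LineCounter {
    // revision number, whitespace, author name, then the source line
    private static final Pattern BLAME_LINE =
            Pattern.compile("^\\s*\\d+\\s+(\\S+)\\s.*$");

    static Map<String, Integer> countByAuthor(String blameOutput) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : blameOutput.split("\n")) {
            Matcher m = BLAME_LINE.matcher(line);
            if (m.matches()) {
                counts.merge(m.group(1), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String sample =
                "  101 alice   public class Foo {\n" +
                "  102 bob         int x = 1;\n" +
                "  103 alice   }\n";
        System.out.println(countByAuthor(sample)); // {alice=2, bob=1}
    }
}
```

Lines of code is a crude metric on its own, but as the book suggests, the value is in counting something and then arguing as a team about what the numbers mean.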

Introducing the new FolderShare!

Yesterday, the Windows Live team released a new version of FolderShare along with their new website.

New features added are:

  • A new website designed to make managing your FolderShare libraries and computers even easier.

  • A new FolderShare with a better setup, a better system tray menu, and better performance on Windows Vista.

  • Improvements on the backend to keep FolderShare running more smoothly and reliably.

And all this is still FREE!

You can check out the new site, install the new FolderShare, and let the team know your feedback!

Introducing Grails

It was Tim Bray who, in his predictions for 2008, mentioned that “_Rails will continue to grow at a dizzying speed, and Ruby will in consequence inevitably become one of the top two or three strategic choices for software developers_”. Indeed, in my eyes the Rails paradigm, which blends well-known best-practice patterns such as MVC web applications with the notions of Coding by Convention and Don’t Repeat Yourself, not only speeds up and simplifies development but also keeps your code base clean. There are no more tedious configuration files, which all repeat themselves yet bear the fingerprint of every developer’s style. Developers can focus on the most important issue at hand: the functionality. That’s why I’ve always been a huge proponent of Ruby and Rails.

But why Ruby?

With a fairly high adoption rate for Ruby on Rails, some problems have been discussed across the internet (follow this discussion, for example):

  • Performance, especially in large installations

  • Interoperability issues with other applications / technologies

  • Deployment onto existing infrastructures and application servers

What other frameworks are out there that provide the same benefits? For today, let’s dive a little deeper into Grails.


The Grails project (formerly known as Groovy on Rails) started in July 2005, and the project just recently announced the long-awaited 1.0 release on February 18, 2008. Grails is built on top of the J2EE stack and combines the best-of-breed tools Hibernate, the Spring Framework, and Groovy scripting, as well as support for my favorite IDE, IntelliJ IDEA (no worries, Eclipse works too). All are mature tools and languages that have been used in the Java community for a long time now. Consequently, Grails provides:

  • Lower learning curve for J2EE developers

  • Easy integration points with existing and new Spring and J2EE applications

  • Enterprise deployment of Grails applications as a WAR/EAR file

  • Similar performance to a Java application (see performance tests)


Let’s dive straight into a demo in which we will create a few persistent domain classes and integrate an existing CMS backend.

Grails Demo Architecture (Click to view the demo)


The following features are included in the 1.0 release:

  • Test-driven development via unit tests and mock objects

  • An environment-sensitive configuration mechanism out of the box

  • Ruby on Rails-like build and development command-line tools (that can create WAR files)

  • OR-Mapping via Hibernate (with full support for all Hibernate-supported databases)

  • The ability to weave in advanced Hibernate functionality and provide custom OR-mappings if needed (for that remaining 5% of the functionality that requires special tuning)

  • An MVC web-layer based on GSPs (Groovy Server Pages) which includes custom taglib and sitemesh support

  • Ajax support (Grails ships with Prototype but has plug-in support for Dojo, Yahoo UI, and Google Web ToolKit)

  • Internationalization Support (Does Ruby do this nowadays? :))

Caveat Emptor?

Yes, the Grails community just released version 1.0 in February 2008. Yet the framework has been in the works for about 2½ years now, and I was quite impressed with the amount of features and finesse already contained. The foundation of the framework is built on top of well-established open source frameworks, which should minimize the risk of using such a new library.

Would I recommend Grails for a huge enterprise project? Probably not. But I would wholeheartedly recommend that every developer and architect look at this great alternative to traditional web development.

Fading the Tech/Creative Line

The common mentality with respect to creative and technology process integration involves a relatively solid line that separates the two disciplines and work streams. Creatives do their concepting, draw up wireframes, create visual assets, and then toss them over the line. Technologists pick these up, create the front-end HTML, create the back-end code, and wire them up to create the system. That is an extremely over-simplified description of both sides of the line - but it represents the general perception of many clients and peers in our industry.

The agile movement has made great strides toward integrating project teams. But the focus here has been on bringing business and end-user representatives into the process and advancing the project through small, iterative cycles. (Again, a dramatic over-simplification. I’m a huge Agile proponent.) The iterative cycle keeps all disciplines (plus business stakeholders and users!) engaged throughout the project. Great progress! But, within an iteration, the line often remains. Both the creative and technical teams are tightly engaged with the business and user representatives. But they’re only loosely engaged with each other.

There are many reasons for this. On a given project the creative and technical teams are often from separate internal organizations - at best. At worst, they’re from separate companies altogether. Beyond that, they often think, talk, and act very differently - making it hard to relate. Right brain, left brain stuff. There is hope, however.

One of the most satisfying things about working with Avenue A | Razorfish is experiencing the blurring of the tech/creative line. As a company with strong marketing, creative, and technology capabilities that are integrated on many projects, we’ve learned through experience how to work and communicate with each other. That is one of our strongest value propositions to customers. We’ve proven that the line can be blurred and there is significant value in doing so. However, it is within the last year that I’ve seen the most substantial fading of the line.

This can be attributed to the popularity and demand for rich Internet applications. RIAs require a much greater level of cross-discipline understanding and cooperation. Windows Presentation Foundation and XAML have done the same thing for desktop applications (and the web, with Silverlight). There is a great whitepaper on the WPF designer/developer workflow entitled The New Iteration. Definitely worth a read. It specifically addresses WPF, XAML, and the Expression tools, but many of the points apply more generally to RIAs, as well. The value proposition is well stated:

"Ultimately, the new collaboration means that iteration of a project can now happen in a much more fluid way. There is no longer the “one-way street” where a change to a specification downstream means a radical reworking of the entire application. The result opens up new possibilities for collaboration between the designer and developer, where a kind of dialogue is possible with the potential to foster greater creativity."

It is that last point - the potential to foster greater creativity - that excites me the most. Technologists are often in a restricting role. We have to set boundaries that the creative team must work within so that we’re able to deliver on their promises. Rather than promote cooperation and collaboration, this can create an “us versus them” mentality. However, with RIAs I have noticed a great change. Technology and creative teams are pushing each other to expand the solution horizon, rather than constrain it. Both teams are equally invested and sharing their unique perspectives, which results in far better solutions.

It may be intuitive that a shared sense of ownership, varying perspectives, and close collaboration will have positive results on a project. As a consultant “back in the day”, when the Internet and HTML were new, I saw the same level of enthusiasm and collaboration between technical and creative teams. But as technology and creative techniques matured, the tech/creative line solidified. As a result, the solutions became somewhat cookie-cutter. That’s not to say companies weren’t launching sites with great creative and technical work. But truly remarkable solutions are conceived when both the technical and creative limits are stretched and combined to produce something truly unique. I’m thrilled to be back in this sweet spot. The industry as a whole seems to be following suit. But unless a deliberate effort is made to avoid falling into comfortable patterns, truly remarkable solutions may once again join the endangered species list.