
Tuesday, September 16, 2008

Build Still Important, but Deployment is King!

Articles - Agile CM Environments
Written by Brad Appleton, Robert Cowham and Steve Berczuk   

Build and Deployment are subjects which are dear to our hearts and we have written quite a lot about them over the years. While the details may change from year to year as technology evolves, the underlying principles remain the same.

Regarding building, we are going to take the opportunity to provide a guide to some of our previous articles which still hold true.

For deployment, we argue that the rise of web applications and Web 2.0 has made deployment one of the main drivers of many aspects of application development today. Hence our headline: "Deployment is King!"

Agile Building
The principles of building software remain the same. We need builds that are:

  • Reliable
  • Repeatable
  • Incremental when appropriate
  • Fast (enough)
  • Automated - not dependent on manual steps
We were satisfying these principles using Make 30 years ago - the challenge has been remaining true to these principles as applications have become larger, and as the technological landscape has evolved.
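
By way of illustration only, here is a minimal Node/TypeScript sketch of those principles; the file names and the compile command are hypothetical, and Make expresses the same incremental idea declaratively:

    // build.ts - a sketch of a reliable, repeatable, incremental, automated build step.
    // Source files, output path and compile command are hypothetical.
    import { execSync } from "child_process";
    import { existsSync, statSync } from "fs";

    const sources = ["src/app.ts", "src/util.ts"]; // hypothetical inputs
    const output = "dist/app.js";                  // hypothetical output

    // Incremental: rebuild only when a source is newer than the output.
    const outputTime = existsSync(output) ? statSync(output).mtimeMs : 0;
    const stale = sources.some((src) => statSync(src).mtimeMs > outputTime);

    if (stale) {
      // Automated and repeatable: a single command, no manual steps.
      execSync("tsc --outDir dist", { stdio: "inherit" });
      console.log(`Rebuilt ${output}`);
    } else {
      console.log("Up to date - nothing to do (fast enough).");
    }

The exact tooling matters far less than the fact that the whole build can be run the same way, by anyone, with one command.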

In The Renaissance Builder we highlighted (with tongue firmly in cheek) the importance of the build engineer. A key point is that companies should not put an inexperienced engineer on the job and expect to get good results, and yet we see this frequently! Make sure your build engineers are technically capable and also respected by other members of the team. This will ensure that the requirements of your build system are not neglected.

Building for Success reviews some standard working patterns and their effect on the build. It goes on to look at what affects build velocity - in summary:

  • Fast machines
  • Shared build servers 
  • Incremental builds and different build tools 
  • IDEs 
  • Shared library problems
The Importance of Building Earnestly addressed the costs of a manual build process, and the importance of getting people started and motivated to improve their process.

Our article Agile Build Promotion: Navigating the "Ocean" of Promotion Notions covers build patterns (Private System Build/Integration Build/Release Build) and build promotion patterns - promoting them into deployment.

Continuous Staging: Scaling Continuous Integration to Multiple Component Teams shows the use of a staging area as part of continuous integration, and leading to release builds.

Deployment is King! Web 2.0 and Beyond...
The greatest application in the world is not much use if you can't deploy it - which means get it into the hands of your customers and end users.

Back in the age of the shrink-wrapped application, deployment became increasingly difficult, a real drain on resources and a restriction on growing businesses. Even with internet-distributed applications, you were usually asking your users to download and run installers on their desktops. For Windows applications that meant the complexity of registry updates, component registrations, DLL Hell and other similar problems. For large corporations with locked-down desktops, rolling out a new application, or an update, was a major piece of work.

So, what if you can get your application into the hands of end users without them having to install extra components onto their desktops? This removes a significant hurdle to adoption and makes it much easier to expand your market.

Early web applications were based on HTML, with server-side applications doing the work. This made deployment much easier: users access centrally controlled servers, so upgrades are deployed to a few servers rather than to thousands or tens of thousands of desktops. The problem with typical web applications was that they were a very poor relation, in terms of user interface and capabilities, to fat client applications.

So, in response to this problem, Web 2.0 technologies such as AJAX have been a way of getting much richer applications into the browser. This gives a user experience much closer to that of fat client applications, yet with the deployment advantages of a server-based web application.
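
To make that concrete, here is a small, hedged sketch of the AJAX technique in TypeScript; the URL and element ids are invented for illustration:

    // A minimal AJAX sketch: request data asynchronously and update part of
    // the page in place, rather than reloading the whole HTML document.
    // The URL and element ids are hypothetical.
    function refreshOrders(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/api/orders?status=open", true); // true = asynchronous
      xhr.onreadystatechange = () => {
        if (xhr.readyState === 4 && xhr.status === 200) {
          const orders: { id: number; total: number }[] = JSON.parse(xhr.responseText);
          const list = document.getElementById("orders");
          if (list) {
            list.innerHTML = orders
              .map((o) => `<li>Order ${o.id}: $${o.total}</li>`)
              .join("");
          }
        }
      };
      xhr.send();
    }

    // Only this fragment of the page changes - no round trip for a new page.
    document.getElementById("refresh")?.addEventListener("click", refreshOrders);

Libraries and frameworks hide the XMLHttpRequest plumbing, but the deployment story is unchanged: this code is served from the same central servers as the rest of the application.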

We contend that the deployment advantages of web applications have conquered a major portion of the application development landscape - deployment is King!

Silverlight vs Flash vs Flex vs AJAX (HTML/CSS/Javascript)
And yet there is a battle being fought over the technologies to be used for web development.

Brad Neuberg explains in his blog post the competitive forces at work between Microsoft's Silverlight, Adobe's Flash and Flex, and the existing boundaries being pushed via JavaScript, CSS, AJAX and the like.

As Brad mentions:

At this point, in April 2008, Silverlight's problem doesn't seem to be its openness. Silverlight's problem is its installed base, or lack thereof. The only statistic I was able to find was a Microsoft claim of 1.5 million downloads per day. It's hard to compare that with the study commissioned by Adobe listing Flash 9 penetration at better than 95%, but clearly the installed base Microsoft is looking for is not yet there, given that MSN Video is still Flash-based.

And in yet another corner (alongside Microsoft and Adobe), we have companies such as Appcelerator providing open source development environments to aid the development of RIAs (Rich Internet Applications).

AJAX Web Application Development
There are a variety of issues to consider when choosing the language and framework for your web application, including:

  • Productivity of development (and enhancement)
  • Performance and scalability
  • Ease of deployment
  • Ease of finding hosting environments
You need to make appropriate choices amongst these often conflicting requirements. Languages such as Ruby and the Ruby on Rails framework have rightly become popular because of their productivity. And yet it has turned out that deploying and managing the resulting production applications has not been that easy.

As Ian Bicking writes in What PHP Deployment Gets Right, there are a variety of reasons for the popularity of PHP as a web development language instead of Ruby or Python:

Why is it important that PHP has a CGI like model? Mostly because it lets two groups separate their work: system administration types (including hosting companies) set up the environment, and developer types use the environment, and they don't have to interact much. The developers are empowered, and the administrators are not bothered.

As Ian explains, PHP powers many web applications due to the simplicity of its deployment and ongoing administration. This includes large content management systems such as Joomla, which powers CMCrossroads itself!

As regards Ruby on Rails, Mitchell Hashimoto discusses some of the history of Ruby deployment options in his blog, and gives this nice diagram:


Ruby Deployment Options - from Mitchell Hashimoto's blog.

Other happenings in the Ruby/Rails world include:

  • Passenger (also known as mod_rails) and Ruby Enterprise Edition - advantages include improved performance and reduced memory requirements
  • Capistrano - a tool to automate deployments, originally targeted at Rails applications but applicable to most applications - a candidate to be the Make of the web deployment world. From an agile point of view, this provides the "one click" deploy option (see the sketch after this list).
  • Hosting companies targeting Rails applications - initially you could only deploy your Rails application if you had your own server (either directly or via an ISP); cheaper shared hosting options just did not exist. These days, companies such as EngineYard provide cheap shared hosting for Rails applications "out of the box". The fact that there is a market here is a sign of progress.
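
To illustrate that "one click" idea (and not Capistrano's actual Ruby syntax), here is a rough TypeScript/Node sketch in which a release is a single, repeatable command; the host names, paths and remote commands are entirely hypothetical:

    // deploy.ts - a sketch of "one click" deployment. This is NOT Capistrano
    // (which is a Ruby tool); hosts, paths and commands are hypothetical.
    import { execSync } from "child_process";

    const servers = ["app1.example.com", "app2.example.com"]; // hypothetical hosts
    const releaseDir = "/var/www/myapp";                      // hypothetical path

    for (const host of servers) {
      // Every step is automated; execSync throws (aborting the deploy) if
      // any remote command fails, so a broken release never goes halfway.
      execSync(`ssh ${host} "cd ${releaseDir} && git pull && ./restart.sh"`, {
        stdio: "inherit",
      });
      console.log(`Deployed to ${host}`);
    }

Capistrano adds niceties such as rollbacks and versioned release directories, but the agile payoff is the same: deployment becomes one command that anyone on the team can run.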

Getting SaaSy with your Mashups
The rise of the SaaS (Software as a Service) model is based on centrally managed (and therefore easy to deploy) software, although there are a variety of business reasons behind it as well.

From Wikipedia:

Software as a service (SaaS, typically pronounced 'sass') is a model of software deployment where an application is hosted as a service provided to customers across the Internet. By eliminating the need to install and run the application on the customer's own computer, SaaS alleviates the customer's burden of software maintenance, ongoing operation, and support. Conversely, customers relinquish control over software versions or changing requirements; moreover, costs to use the service become a continuous expense, rather than a single expense at time of purchase. Using SaaS also can conceivably reduce the up-front expense of software purchases, through less costly, on-demand pricing.

Salesforce.com has its own definition of SaaS and how it is different.

Mashups are defined as:

In web development, a mashup is a web application that combines data from more than one source into a single integrated tool; an example is the use of cartographic data from Google Maps to add location information to real-estate data, thereby creating a new and distinct web service that was not originally provided by either source
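
As a sketch of the pattern (both endpoints and their response shapes are invented for illustration), a mashup pulls from two independent services and joins the results into one view:

    // A mashup sketch: combine two independent data sources into one view.
    // Both endpoints and their response shapes are hypothetical.
    interface Listing { address: string; price: number; }
    interface GeoPoint { address: string; lat: number; lng: number; }

    async function buildMap(): Promise<void> {
      const [listings, points] = await Promise.all([
        fetch("/api/listings").then((r) => r.json() as Promise<Listing[]>),
        fetch("/api/geocode").then((r) => r.json() as Promise<GeoPoint[]>),
      ]);

      // Join on address: each listing is placed at its coordinates.
      for (const listing of listings) {
        const point = points.find((p) => p.address === listing.address);
        if (point) {
          console.log(`${listing.address} ($${listing.price}) at ${point.lat},${point.lng}`);
          // In a real mashup, this is where a map API marker call would go.
        }
      }
    }

    buildMap().catch((err) => console.error("Mashup failed:", err));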

Both SaaS and mashup applications reduce the costs of deployment. Phil Wainewright on ZDNet writes:

Much has been written about the promise of mashups to become serious business tools - as well as the obstacles and challenges they must overcome along the way. It's only now, more than six years since the notion of mashups first came to the fore (they acquired the name a little later), that SaaS vendors and integrators are beginning to realize the full potential of the mashup for enterprise applications. As this first wave of commercial enterprise mashups comes to maturity, it is making clear once and for all the mashup's seminal role as the disruptive motor at the heart of the on-demand model.

Making Web Applications into Utilities
For other web application developers, there are a variety of challenges, including scalability and reliability of your servers.

Industry heavyweights such as Amazon, with its EC2 (Elastic Compute Cloud), and Google, with its Google App Engine, are providing services where the basic idea is that if you conform to the particular requirements of the service, then as a startup you simply pay for what you use and not a penny more. Even heavy users can make savings over having to host their own infrastructure.

Deployment problem solved, but at the expense of conforming to the framework laid down by your chosen provider.

Browser Wars - Part Deux
As we have discussed above, the rise of Web 2.0 is based on the installed base of browsers with their associated capabilities (and inconsistencies as regards CSS and JavaScript support). Microsoft's Internet Explorer retains the lion's share of the installed browser base, but Firefox is in the ascendant, particularly among those with a more technological bent (and not a locked-down desktop).

Into this mix Google has just launched their Chrome browser. Some media hype would have it that this is about replacing the operating system, and as such is an attack on Microsoft with its Windows monopoly (and others consider this a slight overstatement).

And yet Google certainly has clout, and indeed the beta of Chrome is already good enough to influence the direction of browsers and the continuing rise of the web application using Javascript and other AJAX technologies. The jury is out...

Conclusion
Building is getting better and more agile, but you neglect it at your peril.

With the rise of Web 2.0 (and its successor), we are convinced that the requirements for successful (agile) deployment are driving the choice of technologies and application development frameworks.

For those developing non-web-based applications, the underlying deployment drivers of Web 2.0 are also valid - do whatever you need to do to make your deployment easier - and consider its requirements at the start of the lifecycle of whatever application you are building.

Brad Appleton is an enterprise SCM/ALM solution architect for a Fortune 100 technology company. He is co-author of Software Configuration Management Patterns: Effective Teamwork, Practical Integration and of the "Agile SCM" column in CMCrossroads.com's CM Journal, and a former section editor for The C++ Report. Brad has extensive experience, dating back to 1987, using, developing, and supporting SCM environments for teams of all shapes and sizes. He holds an M.S. in Software Engineering and a B.S. in Computer Science and Mathematics. You can reach Brad by email at brad@bradapp.net

Robert Cowham has been in software development for over 20 years in roles ranging from programming to project management. He continues his involvement in development projects but spends most of his time on SCM Consultancy and Training, now working with Vizim. He is the Chair of the Configuration Management Specialist Group of the British Computer Society, has a BSc in Computer Science from Edinburgh University and is a Chartered Engineer (CEng MBCS CITP). You can contact him at robert@vizim.com .

Steve Berczuk is a Technical Lead for an Agile Software Development consulting company. He has been developing software applications since 1989, often as part of geographically distributed teams. In addition to developing software he helps teams use Software Configuration Management effectively in their development process. Steve is co-author of the book Software Configuration Management Patterns: Effective Teamwork, Practical Integration and a Certified ScrumMaster. He has an M.S. in Operations Research from Stanford University and an S.B. in Electrical Engineering from MIT. You can contact him at steve@berczuk.com

Tuesday, August 12, 2008

Game testing

Modern video and computer games take from one to three years to develop (depending on scale). Testing begins late in the development process, sometimes from halfway to 75% into development (it starts so late because, until then, there is little to nothing to playtest). Testers get new builds from the developers on a schedule (daily/weekly) and each version must be uniquely identified in order to map errors to versions. They also test the durability of the game disc through a series of tests that can include how much damage the disc can take before becoming unresponsive, and how glitches can affect how the game runs.

Once the testers get a version, they begin playing the game. Testers must carefully note any errors they uncover. These may range from bugs to art glitches to logic errors and level bugs. Some bugs are easy to document ("Level 5 has a floor tile missing in the opening room"), but many are harder to pin down and may take several paragraphs to describe so that a developer can replicate or find the bug. On a large-scale game with numerous testers, a tester must first determine whether a bug has already been reported before logging it. Once a bug has been reported as fixed, the tester has to go back and verify that the fix works.

This type of "playing" is tedious and grueling. Usually an unfinished game is not "fun" to play, especially over and over. A tester may play the same game — or even the same level in a game — over and over for eight hours or more at a time. If testing feature fixes, the tester may have to repeat a large number of sequences just to get to one spot in the game. Understandably, burn-out is common in this field and many use the position just as a means to get a different job in game development. For this reason, game testing is widely considered a "stepping stone" position. This type of job may be taken by college students as a way to audit the industry and determine if it is the type of environment in which they wish to work professionally.

In software development quality assurance, it is common practice to go back through a feature set near the end of development and ensure that features that once worked still work. This kind of aggressive quality assurance, called regression testing, is most difficult for games with a large feature set: if a new bug is discovered in a feature that used to work, then once it is fixed, regression testing has to take place again.
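
Regardless of the domain, the mechanics of a regression pass can be as simple as re-running every previously passing suite after each fix. The sketch below assumes a Node/TypeScript setup; the suite names are hypothetical and each suite is assumed to exit non-zero on failure:

    // regression.ts - after each fix, the *entire* recorded suite is run
    // again, not just the test for the bug that was fixed.
    // Suite names are hypothetical.
    import { execSync } from "child_process";

    const suites = [
      "tests/movement.test.js",
      "tests/inventory.test.js",
      "tests/save-load.test.js",
      "tests/rendering.test.js",
    ];

    for (const suite of suites) {
      // A regression in any previously working feature stops the run here,
      // because execSync throws when the suite exits with a failure code.
      execSync(`node ${suite}`, { stdio: "inherit" });
    }
    console.log("Regression pass complete: previously working features still pass.");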

Game testing becomes grueling as deadlines loom. Most games go into what is called "crunch time" near deadlines; developers (programmers, artists, game designers and producers) work twelve to fourteen hours a day and the testers must be right there with them, testing late-added features and content. Often during this period staff from other departments may contribute to the testing effort to assist in handling the load.

All console manufacturers require that a submitted title pass a series of rigid standards they have established. Failure to meet the required standards prevents the game from being published. Many video game companies separate technical requirement testing from functionality testing altogether.

Sunday, July 6, 2008

Here is a simple and understandable explanation of the difference between smoke testing and sanity testing that should clear up any confusion.

SMOKE TESTING:

  • Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without going into too much depth.
  • A smoke test is scripted, either as a written set of tests or as an automated test (a sketch of the automated form follows this list).
  • A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
  • Smoke testing is conducted to check whether the most crucial functions of a program are working, without bothering with finer details (it is also used for build verification).
  • Smoke testing is a normal health check on a build of an application before taking it into in-depth testing.
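
As a minimal sketch of an automated smoke test (assuming a recent Node.js runtime with the built-in test runner and fetch; the base URL and paths are hypothetical), each area of the application gets one cheap, shallow check:

    // smoke.test.ts - wide but shallow: one quick check per area of the app.
    // The base URL and paths are hypothetical examples.
    import { test } from "node:test";
    import assert from "node:assert";

    const BASE = "http://localhost:3000"; // hypothetical application under test

    for (const path of ["/login", "/search", "/reports", "/admin"]) {
      test(`smoke: ${path} responds`, async () => {
        const res = await fetch(`${BASE}${path}`);
        // No detailed verification - we only care that the area is alive.
        assert.ok(res.status < 500, `${path} returned ${res.status}`);
      });
    }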

SANITY TESTING:

  • A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
  • A sanity test is usually unscripted, although the same narrow-and-deep idea can be automated (as sketched after this list).
  • A sanity test is used to determine whether a small section of the application is still working after a minor change.
  • Sanity testing is cursory; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
  • Sanity testing verifies whether the requirements of the changed area are met, going deep into the affected features rather than checking all features breadth-first.
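
By way of contrast with the smoke sketch above, a sanity check drills into just the changed area. In this hedged example the module and function under test (applyDiscount in ./pricing) are hypothetical names chosen for illustration:

    // sanity.test.ts - narrow and deep: exercise only the changed area in detail.
    // The pricing module and applyDiscount function are hypothetical.
    import { test } from "node:test";
    import assert from "node:assert";
    import { applyDiscount } from "./pricing";

    test("sanity: discount logic still correct after the pricing change", () => {
      assert.strictEqual(applyDiscount(100, 0.1), 90);   // normal case
      assert.strictEqual(applyDiscount(100, 0), 100);    // no discount
      assert.throws(() => applyDiscount(100, -0.5));     // invalid input rejected
    });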

Saturday, July 5, 2008