iOS updates & source code for Wolfenstein 3D Classic Platinum & DOOM Classic

Today we’ve got some great news for iOS fans. On the heels of recent updates (v 2.1) for both Wolfenstein 3D Classic Platinum & DOOM Classic, we’ve also released the 2.1 source code for both titles!

Wolfenstein 3D Platinum Source Code 2.1
DOOM Classic Source Code 2.1

What’s new in the update? Here’s the rundown…

  • Universal apps with iPad and Retina Display support
  • Revised user interface with a re-mastered HUD and all new menu art
  • Re-mastered sound from the original MIDI source files
  • Original cover art splash images
  • Locked 60 fps for Wolfenstein 3D Classic Platinum/Lite (improved framerate on all devices)
  • In-app purchase in Wolfenstein 3D Classic Lite to unlock the full game (Platinum Pack)
  • Wolfenstein 3D Classic Platinum/Lite now under the Apple 20MB download cap for 3G
  • Optimized for the iPad 2 (re-compiled under iOS 4.x with Xcode 4.x)
  • Assorted bugfixes
  • Removed multiplayer support (currently broken due to iOS 3 and iOS 4 releases). MP will be re-released at a later date in a more robust fashion.

If you haven’t picked up these classics for your iOS device, check out the links below…

Wolfenstein 3D Classic Lite — Free!
Wolfenstein Classic Platinum — $1.99
DOOM Classic — $6.99

John Carmack discusses RAGE on iPhone / iPad / iPod

This month we’re privileged to share a special diary from the legendary John Carmack, technical director and co-founder of id Software. In addition to his current work on RAGE — coming to Xbox 360, Games for Windows, and PlayStation 3 on September 13, 2011 — and id Tech 5 technology, John has been working on an iPhone/iPad/iPod touch version of RAGE that will introduce gamers to the game’s story and world.

Round of applause for John Carmack…

RAGE for iPhone

Our mobile development efforts at id took some twists and turns in the last year. The plan was always to do something RAGE-related on the iPhone/iPad/iPod touch next, but with all the big things going on at id, the mobile efforts weren’t front and center on the priority list. There had been a bit of background work going on, but it was only towards the end of July that I was able to sit down and write the core engine code that would drive the project.

I was excited about how well it turned out, and since this was right before QuakeCon, I broke with tradition and did a live technology demo during my keynote. In hindsight, I probably introduced it poorly. I said something like “It’s RAGE. On the iPhone. At 60 frames a second.” Some people took that to mean that the entire PC/console game experience was going to be on the iPhone, which is definitely not the case.

What I showed was a technology demo, written from scratch, but using the RAGE content creation pipeline and media. We do not have the full RAGE game running on iOS, and we do not plan to try. While it would (amazingly!) actually be possible to compile the full-blown PC/console RAGE game for an iPhone4 with some effort, it would be a hopelessly bad idea. Even the latest and greatest mobile devices are still a fraction of the power of a 360 or PS3, let alone a high end gaming PC, so none of the carefully made performance tradeoffs would be appropriate for the platform, to say nothing of the vast differences in controls.

What we do have is something unlike anything ever seen on the iOS platforms. It is glorious, and a lot of fun. Development has been proceeding at high intensity since QuakeCon, and we hope to have the app out by the end of November.

The technical decision to use our megatexture content creation pipeline for the game levels had consequences for its scope. The data required for the game is big. Really, really big. Seeing Myst do well on the iPhone with a 700 meg download gave me some confidence that users would still download huge apps, and that became the target size for our standard definition version, but the high definition version for iPad / iPhone 4 will be around twice that size. This is more like getting a movie than an app, so be prepared for a long download. Still, for perspective, the full scale RAGE game is around 20 gigs of data with JPEG-XR compression, so 0.7 gigs of non-transcoded data is obviously a tiny slice of it.

Since we weren’t going to be able to have lots of hugely expansive levels, we knew that there would be some disappointment if we went out at a high price point, no matter how good it looked. We have experimented with a range of price points on the iPhone titles so far, but we had avoided the very low end. We decided that this would be a good opportunity to try a $0.99 SD / $1.99 HD price point.  We need to stay focused on not letting the project creep out of control, but I think people will be very happy with the value.

The little slice of RAGE that we decided to build the iPhone product around is “Mutant Bash TV”, a post apocalyptic combat game show in the RAGE wasteland. This is the perfect setup for a quintessential first person shooter game play experience — you pick your targets, aim your shots, time your reloads, dodge the bad guys, and try and make it through to the end of the level with a better score than last time. Beyond basic survival, there are pickups, head shots, and hit streak multipliers to add more options to the gameplay, and there is a broad range of skill levels available from keep-hitting-fire-and-you-should-make-it to almost-impossible.

A large goal of the project has been to make sure that the levels can be replayed many times. The key is making the gameplay itself the rewarding aspect, rather than story progression, character development, or any kind of surprises. Many of the elements that made Doom Resurrection good the first time you played it hurt the replayability, for instance. RAGE iOS is all action, all the time. I have played the game dozens of times, and testing it is still fun instead of a chore.

Technical Geek Details

The id Tech 5 engine uses a uniform paged virtual texture system for basically everything in the game. While the algorithm would be possible on 3GS and later devices, it has a substantial per-fragment processing cost, and updating individual pages in a physical texture is not possible with PVRTC format textures. The approach used for mobile RAGE is to do the texture streaming based on variable sized contiguous “texture islands” in the world. This is much faster, but it forces geometric subdivision of large surfaces, and must be completely predictive instead of feedback reactive. Characters, items, and UI are traditionally textured.

We build the levels and preview them in RAGE on the PC, then run a profiling / extraction tool to generate the map data for the iOS game. This tool takes the path through the game and determines which texture islands are going to be visible, and at what resolution and orientation. The pixels for the texture island are extracted from the big RAGE page file, then anisotropically filtered into as many different versions as needed, and packed into 1024×1024 textures that are PVRTC compressed for the device.

The packing into the textures has conflicting goals: to minimize total app size you want to cram texture islands in everywhere they can fit, but you also don’t want to scatter the islands needed for a given view across a hundred different textures, or radically change your working set in nearby views. As with many NP-complete problems, I wound up with a greedy allocation strategy that optimizes a value metric.

Managing over a gig of media made dealing with flash memory IO and process memory management very important, and I did a lot of performance investigations to figure things out.

Critically, almost all of the data is static, and can be freely discarded. iOS does not have a swapfile, so if you use too much dynamic memory, the OS gives you a warning or two, then kills your process. The bane of iOS developers is that “too much” is not defined, and in fact varies based on what the other apps in memory (Safari, Mail, iPod, etc.) have done. If you read all your game data into memory, the OS can’t do anything with it, and you are in danger. However, if all of your data is in a read-only memory mapped file, the OS can throw it out at will. This will cause a game hitch when you need it next, but it beats an abrupt termination. The low memory warning does still cause the frame rate to go to hell for a couple seconds as all the other apps try to discard things, even if the game doesn’t do much.

Interestingly, you can only memory map about 700 megs of virtual address space, which is a bit surprising for a 32 bit OS. I expected at least twice that, if not close to 3 gigs. We sometimes have a decent fraction of this mapped.

A page fault to a memory mapped file takes about 1.8 ms on an iPhone 4 and 2.2 ms on an iPod 2, and brings in 32k of data. There appears to be an optimization where if you fault at the very beginning of a file, it brings in 128k instead of 32k, which has implications for file headers.

I am pleased to report that fcntl( fd, F_NOCACHE ) works exactly as desired on iOS – I always worry about behavior of historic unix flags on Apple OSs. Using this and page aligned target memory will bypass the file cache and give very repeatable performance ranging from the page fault bandwidth with 32k reads up to 30 mb/s for one meg reads (22 mb/s for the old iPod). This is fractionally faster than straight reads due to the zero copy, but the important point is that it won’t evict any other buffer data that may have better temporal locality. All the world megatexture data is managed with uncached reads, since I know what I need well ahead of time, and there is a clear case for eviction. When you are past a given area, those unique textures won’t be needed again, unlike, say monster animations and audio, which are likely to reappear later.

I pre-touch the relevant world geometry in the uncached read thread after a texture read has completed, but in hindsight I should have bundled the world geometry directly with the textures and also gotten that with uncached reads.

OpenAL appears to have a limit of 1024 sound buffers, which we bumped into. We could dynamically create and destroy the static buffer mappings without too much trouble, but that is a reasonable number for us to stay under.

Another behavior of OpenAL that surprised me was finding (by looking at the disassembly) that it touches every 4k of the buffer on a Play() command. This makes some sense, forcing it to page the entire thing into ram so you don’t get broken sound mixing, but it does unpredictably stall the thread issuing the call. I had sort of hoped that they were just eating the page faults in the mixing thread with a decent sized mix ahead buffer, but I presume that they found pathological cases of a dozen sound buffers faulting while the GPU is sucking up all the bus bandwidth or some such. I may yet queue all OpenAL commands to a separate thread, so if it has to page stuff in, the audio will just be slightly delayed instead of hitching the framerate.

I wish I could prioritize the queuing of flash reads – game thread CPU faults highest, sound samples medium, and textures lowest. I did find that breaking the big texture reads up into chunks helped with the worst case CPU stalls.

There are two project technical decisions that I fretted over a lot:

Because I knew that the basic rendering technology could be expressed with fixed function rendering, the game is written to OpenGL ES 1.1, and can run on the older MBX GPU platforms. While it is nice to support older platforms, all evidence is that they are a negligible part of the market, and I did give up some optimization and feature opportunities for the decision.

It was sort of fun to dust off the old fixed function puzzle skills. For instance, getting monochrome dynamic lighting on top of the DOT3 normal mapping in a single pass involved sticking the lighting factor in the alpha channel of the texture environment color so it feeds through to the blender, where a GL_SRC_ALPHA, GL_ZERO blend mode effects the modulation on the opaque characters. This sort of fixed function trickery still makes me smile a bit, but it isn’t a relevant skill in the modern world of fragment shaders.

The other big one is the codebase lineage.

My personally written set of iPhone code includes the renderer for Wolfenstein RPG, all the iPhone specific code in Wolfenstein Classic and Doom Classic, and a few one-off test applications. At this point, I feel that I have a pretty good idea of what The Right Thing To Do on the platform is, but I don’t have a mature expression of that in a full game. There is some decent code in Doom Classic, but it is all C, and I would prefer to do new game development in (restrained) C++.

What we did have was Doom Resurrection, which was developed for us by Escalation Studios, with only a few pointers here and there from me. The play style was a pretty close match (there is much more freedom to look around in RAGE), so it seemed like a sensible thing. This fits with the school of thought that says “never throw away the code” (http://www.joelonsoftware.com/articles/fog0000000069.html ). I take issue with various parts of that, and much of my success over the years has involved wadding things up and throwing it all away, but there is still some wisdom there.

I have a good idea what the codebase would look like if I wrote it from scratch. It would have under 100k of mutable CPU data, there wouldn’t be a resource related character string in sight, and it would run at 60 fps on new platforms / 30 fps on old ones. I’m sure I could do it in four months or so (but I am probably wrong). Unfortunately, I can’t put four months into an iPhone project. I’m pushing it with two months — I have the final big RAGE crunch and forward looking R&D to get back to.

So we built on the Resurrection codebase, which traded various compromises in code efficiency for expediency. It was an interesting experience for me, since almost all the code that I normally deal with has my “coding DNA” on it, because the id Software coding standards were basically “program the way John does.”  The Escalation programmers come from a completely different background, and the codebase is all STL this, boost that, fill-up-the-property list, dispatch the event, and delegate that.

I had been harboring some suspicions that our big codebases might benefit from the application of some more of the various “modern” C++ design patterns, despite seeing other large game codebases suffer under them. I have since recanted that suspicion.

I whine a lot about it (occasionally on twitter), and I sometimes point out various object lessons to the other mobile programmers, but in the end, it works, and it was probably the right decision.

John Carmack 10-26-2010 via Bethblog.com

Development on Doom Classic iOS

By John Carmack, Technical Director, Id Software (2009)
http://www.idsoftware.com/doom-classic/doomdevelopment.htm

Way back in March when I released the source for Wolfenstein 3D Classic, I said that Doom Classic would be coming “real soon”, and on April 27, I gave a progress report:
http://www.idsoftware.com/iphone-doom-classic-progress/
I spent a while getting the multiplayer functionality up, and I figured I only had to spend a couple days more to polish things up for release.

However, we were finishing up the big iPhone Doom Resurrection project with Escalation Studios, and we didn’t want to have two Doom games released right on top of each other, so I put Doom Classic aside for a while.  After Doom Resurrection had its time in the sun, I was prepared to put the rest of the work into Doom Classic, but we ran into another schedule conflict.  As I related in my Wolf Classic notes http://www.idsoftware.com/wolfenstein-3d-classic-platinum/wolfdevelopment.htm , Wolfenstein RPG for the iPhone was actually done before Wolfenstein Classic, but EA had decided to sit on it until the release of the big PC / console Wolfenstein game in August.

I really thought I was going to go back and finish things up in September, but I got crushingly busy on other fronts.  In an odd little bit of serendipity, after re-immersing myself in the original Doom for the iPhone, I am now working downstairs at Id with the Doom 4 team.  I’m calling my time a 50/50 split between Rage and Doom 4, but the stress doesn’t divide.  September was also the month that Armadillo Aerospace flew the level 2 Lunar Lander Challenge.
Finally, in October I SWORE I would finish it, and we aimed for a Halloween release.  We got it submitted in plenty of time, but we ran into a couple approval hiccups that caused it to run to the very last minute.  The first was someone incorrectly thinking that the “Demos” button, which played back recorded demos from the game, was somehow providing demo content for other commercial products, which is prohibited.  The second issue was the use of an iPhone image in the multiplayer button, which we had to make a last minute patch for.

Release notes

Ok, the game is finally out (the GPL source code is being packaged up for release today).  Based on some review comments, there are a couple clarifications to be made:

Multiplayer requires a WiFi connection that doesn’t have UDP port 14666 blocked.  I’m quite happy with the simple and fast multiplayer setup, but it seems like many access points just dump the packets in the trash.  If the multiplayer button on the main menu doesn’t start pulsing for additional players after the first player has hit it, you won’t be able to connect.  I have also seen a network where the button would pulse, but the player would never get added to the player list, which meant that somehow the DNS packets were getting through, but the app packets weren’t.  It works fine on a normal AirPort install…  More on networking below.

I took out tilt-to-turn just to free up some interface screen space, because I didn’t know anyone that liked that mode, and my query thread on Touch Arcade didn’t turn up people that would miss it a lot.

Evidently there are a few people that do care a lot, so we will cram that back in on the next update.  The functionality is still there without a user interface, so you can enable it by four-finger-tapping to bring up the keyboard and typing “tiltturn 4000” or some number like that, and it will stay set.  Make sure you have tiltmove pulled down to 0.  I never got around to putting in a real console, but you can change a few parameters like that, as well as enter all the original doom cheat codes like IDDQD, IDKFA, etc.

I think that the auto-centering control sticks in Doom Classic are a better control scheme than the fixed sticks from Wolf Classic.  The advice for Wolf was to adjust the stick positions so that your thumbs naturally fell in the center point, so I just made that automatic for Doom.  Effective control always involved sliding your thumbs on the screen, rather than discretely tapping it, and this mode forces you to do that from the beginning.

Still, even if the new mode is some fraction “better”, there are a lot of people who have logged a lot of hours in Wolfenstein Classic, and any change at all will be a negative initially.  In the options->settings menu screen, there is a button labeled “Center sticks: ON” that can be toggled off to keep the sticks fixed in place like in Wolf.

A subtle difference is that the turning sensitivity is now graded so that a given small movement will result in a specific percentage increase in speed, no matter where in the movement range it is.  With linear sensitivity, if you are 10 pixels off from the center and you move your thumb 10 pixels farther, then the speed exactly doubles.  If you are 50 pixels off from the center, the same 10 pixel move only increases your turning rate by 20%.  With ramped sensitivity, you would get a 20% (depending on the sensitivity scale) increase in speed in both cases, which tends to be better for most people.  You can disable this by toggling the “Ramp turn: ON” option off.

In hindsight, I should have had a nice obvious button on the main options screen that said “Wolfenstein Style” and set the same options, but I have always had difficulty motivating myself to do good backwards compatibility engineering.  Even then, the movement speeds are different between the games, so it wouldn’t have felt exactly the same.

It was a lot of fun to do this project, working on it essentially alone, as a contrast to the big teams on the major internal projects.  I was still quite pleased with how the look and feel of the game holds up after so long, especially the “base style” levels.  The “hell levels” show their age a lot more, where the designers were really reaching beyond what the technology could effectively provide.

Future iPhone work

We do read all the reviews in the App store, and we do plan on supporting Doom Classic with updates.  Everything is still an experiment for us on the iPhone, and we are learning lessons with each product.  At this point, we do not plan on making free lite versions of future products, since we didn’t notice anything worth the effort with Wolfenstein, and other developers have reported similar findings.

We have two people at Id that are going to be dedicated to iPhone work.  I doubt I will be able to personally open Xcode again for a few months, but I do plan on trying to work out a good touch interface for Quake Classic and the later 6DOF games.  I also very much want to make at least a tech demo that can run media created with a version of our idTech 5 megatexture content creation pipeline.  I’m not sure exactly what game I would like to do with it, so it might be a 500 mb free gee-whiz app…

Wolfenstein Classic Platinum was a break-in opportunity for the new internal iPhone developers.  We were originally planning on making the Spear of Destiny levels available as in-app purchased content.  Then we decided to make it a separate “Platinum Edition” application at a reasonable price.  Finally, we decided that we would just make it a free update, but something has gone wrong during this process — people who buy the app for the first time get everything working properly, but many people who upgrade the App from a previous purchase are seeing lots of things horribly broken.  We are working with Apple to try to debug and fix this, but the workaround is to uninstall the app completely, then reload it.  The exciting thing about Wolf Platinum is the support for downloadable levels, which is the beta test for future game capabilities.  Using a URL to specify downloadable content for apps is a very clean way to interface to the game through a web page or email message.

The idMobile team is finishing up the last of the BREW versions of Doom 2 RPG, and work has started on an iPhone specific version, similar to the Wolfenstein RPG release.  The real-time FPS games are never going to be enjoyable for a lot of people, and the turn based RPG games are pretty neat in many regards.  If they are well received, we will probably bring over the Orcs&Elves games as well.

I want to work on a Rage themed game to coincide with Rage’s release, but we don’t have a firm direction or team chosen for it.  I was very excited about doing a really-designed-for-the-iPhone first person shooter, but at this point I am positive that I don’t have the time available for it.

Networking techie stuff

I doubt one customer in ten will actually play a network game of Doom Classic, but it was interesting working on it.

Way back in March when I was first starting the work, I didn’t want the game to require 3.0 to run, and I generally try to work with the lowest level interfaces possible for performance critical systems, so I wasn’t looking at GameKit for multiplayer.  I was hoping that it was possible to use BSD sockets to allow both WiFi networking on 2.0 devices and WiFi or ad-hoc bluetooth on 3.0 devices.  It turns out that it is possible, but it wasn’t documented as such anywhere I could find.

I very much approve of Apple’s strategy of layering Obj-C frameworks on top of Unix style C interfaces.  Bonjour is a layer over DNS, and GameKit uses sockets internally.  The only bit of obscure magic that goes on is that the bluetooth IP interface only comes into existence after you have asked DNS to resolve a service that was reported for it.  Given this, there is no getting around using DNS for initial setup.

With WiFi, you could still use your own broadcast packets to do player finding and stay completely within the base sockets interfaces, and this might even make some sense, considering that there appear to be some WiFi access points that will report a DNS service’s existence that your app can’t actually talk to.

For every platform I have done networking on previously, you could pretty much just assume that you had the loopback interface and an Ethernet interface, and you could just use INADDR_ANY for pretty much everything.  Multiple interfaces used to just be an issue for big servers, but the iPhone can have a lot of active interfaces — loopback, WiFi Ethernet, Bluetooth Ethernet, and several point to point interfaces for the cellular data networks.

At first, I was excited about the possibility of multiplayer over 3G.  I had been told by someone at Intel that they were seeing ping times of 180 ms on 3G devices, which could certainly be made to work for gaming.

Unfortunately, my tests, here in Dallas at least, show about twice that, which isn’t worth fighting.  I’m a bit curious whether they were mistaking one-way times, or if the infrastructure in California is really that much better.  In any case, that made my implementation choice clear — local link networking only.

A historical curiosity: the very first release of the original Doom game on the PC used broadcast IPX packets for LAN networking.  This seemed logical, because broadcast packets for a network game of N players has a packet count of just N packets on the network each tic, since everyone hears each packet.  The night after we released the game, I was woken up by a call from a college sysadmin yelling at me for crippling their entire network.  I didn’t have an unlisted number at the time.  When I had decided to implement network gaming, I bought and read a few books, but I didn’t have any practical experience, so I had thought that large networks were done like the books explained, with routers connecting independent segments.  I had no idea that there were many networks with thousands of nodes connected solely by bridges that forwarded all broadcast packets over lower bit rate links.  I quickly changed the networking to have each peer send addressed packets to the other peers.  More traffic on the local segment, but no chance of doing horrible things to bridged networks.

WiFi is different from wired Ethernet in a few ways.  WiFi clients don’t actually talk directly to each other, they talk to the access point, which rebroadcasts the packet to the destination, so every packet sent between two WiFi devices is actually at least two packets over the air.

An ad-hoc WiFi network would have twice the available peer to peer bandwidth and half the packet drop rate of an access point based one.  Another point is that unlike wired Ethernet, the WiFi link level actually does packet retransmits if the destination doesn’t acknowledge receipt.  They won’t be retransmitted forever, and the buffer spaces are limited, so it can’t be relied upon the way you can rely on TCP, but packet drops are rarer than you would expect.  This also means that there are lots of tiny little ACK packets flying around, which contributes to reduced throughput.  Broadcast packets are in-between — the broadcast packet is sent from the source to the access point with acknowledgment and retransmits, but since the access point can’t know who it is going to, it just fires it out blind a single time.

I experimentally brought the iPhone networking up initially using UDP broadcast packets, but the delivery was incredibly poor.  Very few packets were dropped, but hundreds of milliseconds could sometimes go by with no deliveries, then a huge batch would be delivered all at once.  I thought it might be a policy decision on our congested office access point, but it behaved the same way at my house on a quiet LAN, so I suspect there is an iPhone system software issue.  If I had a bit more time, I would have done comparisons with a WiFi laptop.  I had pretty much planned to use addressed packets anyway, but the broadcast behavior was interesting.

Doom PC was truly peer to peer, and each client transmitted to every other client, for N * (N-1) packets every tic.  It also stalled until valid data had arrived from every other player, so adding more players hurts in two different ways — more packets = more congestion = more likely to drop each individual packet.  The plus side of an arrangement like this is that it is truly fair, no client has any advantage over any other, even if one or more players are connected by a lower quality link.  Everyone gets the worst common denominator behavior.

I settled on a packet server approach for the iPhone, since someone had to be designated a “server” anyway for DNS discovery, and it has the minimum non-broadcast packet count of 2N packets every tic.  Each client sends a command packet to the server each tic, the server combines all of them, then sends an addressed packet back to each client.  The remaining question was what the server should do when it hasn’t received an updated command from a client.  When the server refused to send out a packet until it had received data from all clients, there was a lot more variability in the arrival rate.  It could be masked by intentionally adding some latency on each client side, but I found that it plays better to just have the server repeat the last valid command when it hasn’t gotten an update.  This does mean that there is a slight performance advantage to being the server, because you will never drop an internal packet.

The client always stalls until it receives a server packet; there was no way I had the time to develop any latency reducing / drop mitigating prediction mechanisms.  There are a couple full client / server, internet capable versions of Doom available on the PC, but I wanted to work from a more traditional codebase for this project.

So, I had the game playing well over WiFi, but communication over the Bluetooth interface was significantly worse.  There was an entire frame of additional latency versus WiFi, and the user mode Bluetooth daemon was also sucking up 10% of the CPU.  That would have been livable, but there were regular surges in the packet delivery rate that made it basically unplayable.

Surging implies a buffer somewhere backing up and then draining, and I had seen something similar but less damaging occasionally on WiFi as well, so I wondered if there might be some iPhone system communication going on.  I spent a little while with Wireshark trying to see if the occasional latency pileup was due to actual congestion, and what was in the packets, but I couldn’t get my MacBook into promiscuous WiFi mode, and I didn’t have the time to configure a completely new system.

In the end, I decided to just cut out the Bluetooth networking and leave it with WiFi.  There was a geek-neatness to having a net game with one client on WiFi and another on Bluetooth, but I wasn’t going to have time to wring it all out.

John Carmack on DOOM Classic Development, Fan Questions

05.11.2009 via Bethblog.com

Now that DOOM Classic has been released on iTunes, John Carmack has written his development notes on the project which you can read here.

This week we took some questions for John from the id Software twitter feed, and you will find his answers after the jump.

Khct: What was the most fun and/or rewarding part of developing DOOM Classic?
Porting a game is a very different experience from developing a new game. With a port, after working for a while, there is a moment when BANG — the game is there. After that, it is mostly a matter of fixing problems, optimizing, and polishing the experience. This project was especially rewarding because I felt that I was being extremely productive on the days that I was working on it — I was probably the best person in the world to do that work at that time, and I was knocking out issues left and right.

Matthaigh: Did you encounter any problems when porting the code to iPhone/touch? Such as APIs you used?
It went very smoothly. The prBoom codebase that I based it on already compiled for OS X, so there wasn’t much grunt work, and I had all the device specific IO code that I developed for Wolfenstein Classic. Being able to take advantage of the GPL code that other people have maintained and improved over the years has been very satisfying for me. I always argued that we got worthwhile intangible benefits from my policy of releasing the source code to the older games, but with Wolfenstein Classic and DOOM Classic I can now point to significant amounts of labor that I was personally saved. In fact, the products probably never would have existed at all if my only option was to work from the original “dusty deck” source code for the games. If we were even able to find the original code at all. Hooray for open source!

Seventhcycle: What sort of synth did you use for DOOM Classic’s music? It sounds like AWE32 or AWE64. Why not use something like a SDD4?

The following answer is from Christian Antkow, Aural Assault Technician, id Software

Where possible, I pulled Redbook audio off Bobby Prince's archival CDs, and I can only assume it was created with a Creative AWE32 or some other comparable relic back in the day. Where I did not have original Redbook audio to work with, I went back to the original MIDI files and ran them through a modern Creative X-Fi General MIDI module in an attempt to match the original Redbook audio as closely as possible. My original intent was to use the East/West Colossus virtual instrument in my DAW and run all the MIDI through its GM instruments to generate completely new high-quality renderings. When I did so with the first couple of tracks, it just sounded way too "real" and all sense of nostalgia was lost.

So, that’s really the main reason I didn’t redo everything in a modern instrument. All sense of nostalgia would be lost and I felt that needed to be retained.

Kevinquilen: After all these years, how do you feel that people are still interested in DOOM and Wolf 3-D up against newer games?
Nostalgia undoubtedly plays a part, but I do think that the games hold their own and then some for playability. The iPhone market is full of titles that are heavy on aesthetics and light on quality gameplay. There is a LOT of gameplay in the old titles, and I spent a significant amount of effort to make sure that they play well on the new platform.

Thanks to all of the Twitter followers who submitted their questions, and of course John and Christian for answering them!

Orcs & Elves – John Carmack’s Blog


May 2nd, 2006 – via armadilloaerospace.com

I'm not managing to make regular updates here, but I'll keep this around just in case. I have a bunch of things that I want to talk about — some thoughts on programming style and reliability, OpenGL, Xbox 360, etc. — but we have a timely topic with the release of our second mobile game, Orcs & Elves, which has spurred me into making this update.

DoomRPG, our (Id Software’s and Fountainhead Entertainment’s) first mobile title, has been very successful, both in sales and in awards. I predict that the interpolated turn based style of 3D gaming will be widely adopted on the mobile platform, because it plays very naturally on a conventional cell phone. Gaming will be a lot better when there is a mass market of phones that can be played more like a gamepad, but you need to make do with what you actually have.

One of the interesting things about mobile games is that the sales curve is not at all like the drastically front loaded curve of a PC or console game. DoomRPG is selling better now than when it was initially released, and the numbers are promising for supporting additional development work. However, unless I am pleasantly surprised, the hardware capabilities are going to advance much faster than the market in the next couple years, leading to an unusual situation where you can only afford to develop fairly crude games on incredibly powerful hardware. Perhaps "elegantly simple" would be the better way of looking at it, but it will still wind up being like developing an Xbox title for $500,000. That will wind up being great for many small game companies that just want to explore an idea, but having resources far in excess of your demands does minimize the value of being a hot shot programmer. :-)

To some degree this is already the case on high end BREW phones today. I have a pretty clear idea what a maxed out software renderer would look like for that class of phones, and it wouldn't be the PlayStation-esque 3D graphics that seem to be the standard direction. When I was doing the graphics engine upgrades for BREW, I started along those lines, but after putting in a couple days at it I realized that I just couldn't afford to spend the time to finish the work. "A clear vision" doesn't mean I can necessarily implement it in a very small integral number of days. I wound up going with a less efficient and less flexible approach that was simple and robust enough to not likely need any more support from me after I handed it over (it didn't).

During the development of DoomRPG, I had commented that it seemed obvious that it should be followed up with a "traditional, Orcs & Elves sort of fantasy game". A couple people independently commented that "Orcs & Elves" wasn't a bad name for a game, so since we didn't run into any obstacles, Orcs & Elves it was. Naming new projects is a lot harder than most people think, because of trademark issues.

In hindsight, we made a strategic mistake at the start of O&E development. We were fresh off the high end BREW version of DoomRPG, and we all liked developing on BREW a lot better than Java. It isn't that BREW is inherently brilliant, it just avoids the deep sucking nature of Java for resource constrained platforms (however, note the above about many mobile games not being resource constrained in the future), and allows you to work inside Visual Studio. O&E development was started high-end first, with the low-end versions done afterwards. I should have known better (Anna was certainly suspicious), because it is always easier to add flashy features without introducing any negatives than it is to chop things out without damaging the core value of a game. The high end version is really wonderful, with all the graphics, sound, and gameplay we aimed for, but when we went to do the low end versions, we found that even after cutting the media as we planned, we were still a long way over the 280k Java application limit. Rather than just butchering it, we went for pain, suffering, and schedule slippage, eventually delivering a game that still maintained high quality after the de-scoping (the low end platforms still represent the majority of the market). It would have been much easier to go the other way, but the high end phone users will be happy with our mistake.

DoomRPG had three base platforms that were customized for different phones — Java, low end BREW, and high end BREW. O&E added a high end java version that kept most of the quality of the high end BREW version on phones fast enough to support it from carriers willing to allow the larger download. The download size limits are probably the most significant restriction for gaming on the high end phones. I don’t really understand why the carriers encourage streaming video traffic, but balk at a couple megs of game media.

I am really looking forward to the response to Orcs&Elves, because I think it is one of the best product evolutions I have been involved in. The core game play mechanics that were laid out in DoomRPG have proven strong and versatile (again, I bet we have a stable genre here), but now we have a big bag of tricks and a year of polishing the experience behind us, along with a world of some depth. I found it a very good indicator that play testers almost always lost track of time while playing.

This project was doubly nostalgic for me — the technology was over a decade old for me, but the content took me back twenty years. All the computer games I wrote in high school were adventure games, and my first two commercial sales were Ultima style games for the Apple II, but id Software never got around to doing one. Old timers may recall that we were going to do a fantasy game called "The Fight For Justice" (starring a hero called Quake…) after Commander Keen, but Wolfenstein 3D and the birth of the FPS sort of got in the way. :-)

John Carmack

Cell phone adventures – John Carmack’s Blog


March 27th, 2005 – via armadilloaerospace.com

I’m not a cell phone guy. I resisted getting one at all for years, and even now I rarely carry it. To a first approximation, I don’t really like talking to most people, so I don’t go out of my way to enable people to call me. However, a little while ago I misplaced the old phone I usually take to Armadillo, and my wife picked up a more modern one for me. It had a nice color screen and a bunch of bad java game demos on it. The bad java games did it.

I am a big proponent of temporarily changing programming scope every once in a while to reset some assumptions and habits. After Quake 3, I spent some time writing driver code for the Utah-GLX project to give myself more empathy for the various hardware vendors and get back to some low-level register programming. This time, I decided I was going to work on a cell phone game.

I wrote a couple of Java programs several years ago, and I was left with a generally favorable impression of the language. I dug out my old "Java in a Nutshell" and started browsing around on the web for information on programming for cell phones. After working my way through the alphabet soup of J2ME, CLDC, and MIDP, I've found that writing for the platform is pretty easy.

In fact, I think it would be an interesting environment for beginning programmers to learn on. I started programming on an Apple II a long time ago, when you could just do an “hgr” and start drawing to the screen, which was rewarding. For years, I’ve had misgivings about people learning programming on Win32 (unix / X would be even worse), where it takes a lot of arcane crap just to get to the point of drawing something on the screen and responding to input. I assume most beginners wind up with a lot of block copied code that they don’t really understand.

All the documentation and tools needed are free off the web, and there is an inherent neatness to being able to put the program on your phone and walk away from the computer. I wound up using the latest release of NetBeans with the mobility module, which works pretty well. It certainly isn't MSDev, but for a free IDE it seems very capable. On the downside, MIDP debugging sessions are very flaky, and there is something deeply wrong when text editing on a 3.6 GHz processor is anything but instantaneous.

I spent a while thinking about what would actually make a good game for the platform, which is a very different design space than PCs or consoles. The program and data sizes are tiny, under 200k for java jar files. A single texture is larger than that in our mainstream games. The data sizes to screen ratios are also far out of the range we are used to. A 128x128x16+ bit color screen can display some very nice graphics, but you could only store a half dozen uncompressed screens in your entire size budget. Contrast with PCs, which may be up to a few megabytes of display data, but the total game data may be five hundred times that.
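The "half dozen uncompressed screens" figure above is easy to verify with back-of-the-envelope arithmetic:

```java
public class SizeBudget {
    public static void main(String[] args) {
        int screenBytes = 128 * 128 * 2; // 128x128 pixels at 16-bit color = 32768 bytes
        int jarBudget = 200 * 1024;      // roughly the 200k jar size budget
        int screens = jarBudget / screenBytes;
        System.out.println(screens);     // prints 6 -- the "half dozen screens"
    }
}
```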

You aren’t going to be able to make an immersive experience on a 2” screen, no matter what the graphics look like. Moody and atmospheric are pretty much out. Stylish and fun is about the best you can do.

The standard cell phone style discrete button direction pad with a center action button is a good interface for one handed navigation and selection, but it sucks for games, where you really want a game boy style rocking direction pad for one thumb, and a couple separate action buttons for the other thumb. These styles of input are in conflict with each other, so it may never get any better. The majority of traditional action games just don’t work well with cell phone style input.

Network packet latency is bad, and not expected to be improving in the foreseeable future, so multiplayer action games are pretty much out (but see below).

I have a small list of games that I think would work out well, but what I decided to work on is DoomRPG – sort of Bard’s Tale meets Doom. Step based smooth sliding/turning tile movement and combat works out well for the phone input buttons, and exploring a 3D world through the cell phone window is pretty neat. We talked to Jamdat about the business side of things, and hired Fountainhead Entertainment to turn my proof-of-concept demo and game plans into a full-featured game.

So, for the past month or so I have been spending about a day a week on cell phone development. Somewhat to my surprise, there is very little internal conflict switching off from the high end work during the day with gigs of data and multi-hundred instruction fragment shaders down to texture mapping in java at night with one table lookup per pixel and 100k of graphics. It’s all just programming and design work.
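The "one table lookup per pixel" style of renderer mentioned above can be sketched as a fixed-point span loop. This is my own illustrative example, not the actual DoomRPG code: the texture coordinate is walked in 16.16 fixed point so each output pixel costs a single array read.

```java
// Hypothetical inner loop for a software texture mapper on a phone:
// fill a run of screen pixels from one row of a 64-wide texture,
// stepping the u coordinate in 16.16 fixed point.
public class SpanRenderer {
    static void drawSpan(short[] dest, int destOfs, int count,
                         short[] texRow, int uFixed, int uStep) {
        for (int i = 0; i < count; i++) {
            // One table lookup per pixel: shift out the fraction, wrap at 64.
            dest[destOfs + i] = texRow[(uFixed >> 16) & 63];
            uFixed += uStep;
        }
    }
}
```

Keeping the per-pixel work down to an add, a shift, a mask, and one load is about as lean as a Java inner loop gets, which matters when the interpreter is the bottleneck.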

It turns out that I’m a lot less fond of Java for resource-constrained work. I remember all the little gripes I had with the Java language, like no unsigned bytes, and the consequences of strong typing, like no memset, and the inability to read resources into anything but a char array, but the frustrating issues are details down close to the hardware.

The biggest problem is that Java is really slow. On a pure CPU / memory / display / communications level, most modern cell phones should be considerably better gaming platforms than a Game Boy Advance. With Java, on most phones you are left with about the CPU power of an original 4.77 MHz IBM PC, and lousy control over everything.

I spent a fair amount of time looking at java byte code disassembly while optimizing my little rendering engine. This is interesting fun like any other optimization problem, but it alternates with a bleak knowledge that even the most inspired java code is going to be a fraction the performance of pedestrian native C code.

Even when compiled to completely native code, Java is hobbled by semantic requirements like range checking on every array access. One of the phones (Motorola i730) has an option that does some load time compiling to improve performance, which does help a lot, but you have no idea what it is doing, and innocuous code changes can cause the compilation heuristic to fail.

Write-once-run-anywhere. Ha. Hahahahaha. We are only testing on four platforms right now, and not a single pair has the exact same quirks. All the commercial games are tweaked and compiled individually for each (often 100+) platform. Portability is not a justification for the awful performance.

Security on a cell phone is justification for doing something, but an interpreter isn't a requirement – memory management units can do just as well. I suspect this did have something to do with Java's adoption early on. A simple embedded processor with no MMU could run arbitrary programs securely with Java, which might make it the only practical option. However, once you start using blazingly fast processors to improve the awful performance, an MMU with a classic OS model looks a whole lot better.

Even saddled with very low computing performance, tighter implementation of the platform interface could help out a lot. I’m not seeing very conscientious work on the platforms so far. For instance, there is just no excuse for having 10+ millisecond granularity in timing. Given that the java paradigm is sort of thread-happy anyway, having a real scheduler that Does The Right Thing with priorities and hardware interfacing would be an obvious thing. Pressing a key should generate a hardware interrupt, which should immediately activate the key listening thread, which should be able to immediately kill an in-process rendering and restart another one if desired. The attitude seems to be 15 msec here, 20 there, stick it on a queue, finish up a timeslice, who cares, right?

I suspect I will enjoy working with BREW, the competing standard for cell phone games. It lets you use raw C/C++ code, or even, I suppose, assembly language, which completely changes the design options. Unfortunately, they only have a quarter the market share that the J2ME phones have. Also, the relatively open java platform development strategy is what got me into this in the first place – one night I just tried writing a program for my cell phone, which isn’t possible for the more proprietary BREW platform.

I have a serious suggestion for the handset designers to go with my idle bitching. I have been told that fixing data packet latency is apparently not in the cards, and it isn’t even expected to improve much with the change to 3G infrastructure. Packet data communication seems more modern, and has the luster of the web, but it is worth realizing that for network games and many other flashy Internet technologies like streaming audio and video, we use packets to rather inefficiently simulate a switched circuit.

Cell phones already have a very low latency digital data path – the circuit switched channel used for voice. Some phones have included cellular modems that use either the CSD standard (circuit switched data) at 9.8Kbits or 14.4Kbits or the HSCSD standard (high speed circuit switched data) at 38.4Kbits or 57.6Kbits. Even the 9.8Kbit speed would be great for networked games. A wide variety of two player peer-to-peer games and multiplayer packet server based games could be implemented over this with excellent performance. Gamers generally have poor memories of playing over even the highest speed analog modems, but most of the problems are due to having far too many buffers and abstractions between the data producers/consumers and the actual wire interface. If you wrote eight bytes to the device and it went in the next damned frame (instead of the OS buffer, which feeds into a serial FIFO, which goes into another serial FIFO, which goes into a data compressor, which goes into an error corrector, and probably a few other things before getting into a wire frame), life would be quite good. If you had a real time scheduler, a single frame buffer would be sufficient, but since that isn’t likely to happen, having an OS buffer with accurate queries of the FIFO positions is probably best. The worst gaming experiences with modems weren’t due to bandwidth or latency, but to buffer pileup.