Wednesday, October 24, 2007

A Different Take on the Demise of OiNK

This afternoon, popular alt/indie music blog Stereogum highlighted one user's reaction to the arrest of OiNK's creator and the closing of its forums. I thought it was an important enough perspective that it deserved a repost:

Yes, it provided a way to get free versions of widely available popular albums, but it also archived and cataloged the last 50 years of music better than any other place on Earth. Many of which are not readily available for purchase anywhere. It was an excellent record of one field of human achievement and now its gone ... How about the Clash's "Vanilla Tapes" that were lost on a subway train 30 years ago? On Oink, but not in stores.

It was the digital music version of the burning of the Library at Alexandria.

They destroyed the greatest historical archive of rock so they could make a couple more bucks off Rhianna's "Umbrella".

He has a very interesting point: music represents a very important part of our culture, and because it is such a dynamic, quickly growing form of culture, it's very difficult to create a complete repository of it. Additionally, the copyright holders are not keen on the idea of library-like repositories where interested parties could "borrow" and listen to music not otherwise available.

Now, if you'll excuse me, I am going to use this fine gentleman's post to kick off my own.

This is not unlike the availability of written works. Because the vast majority of published written works are no longer under copyright, publishing companies no longer have any reason to attempt to print and sell them. The Library of Congress used to have a very complete collection of works published (and previously published) in the United States; in fact, it used to require the submission of a copy of every piece of writing published in the US. This is no longer the case, but the Library of Congress still represents (probably) the most complete collection of written works in the United States.

The Public Library system begins to address this issue, but only really for writing. Public Libraries don't have particularly good collections of modern music.

So what did OiNK represent? In a lot of ways, OiNK was, for music, an analogue to the way that the Library of Congress used to operate for written works. Every time an album came out, important or not, popular or not, it was probably posted on OiNK. Because the userbase included a large number of audio enthusiasts, much of the influential and important (popular or not) music of the last fifty years - in fact, most of the music published in some sort of digital form - could be found on the forums. Even better, the majority of the music posted was of archival quality, simply due to the site's stringent posting requirements.

The same argument could be made for movies. It seems as though the culture-creation companies, as they dub themselves, would like to put a minimum price on culture. Is this really the way to foster a creative culture?

Let's consider the CD. Ten years ago, people, enthusiasts included, purchased CDs to listen to music. Today, the purchase of a CD has begun to transform into something very different: a means to support an artist. The purchase of a CD or a record has become the same action as purchasing a pin or a t-shirt. Music is free, and the way we show our appreciation for the music we enjoy is by buying merchandise from the band.

Let me repeat that: Music is free. It's free. Buying music has turned into something you do when you like it, instead of something you do when you want to listen to it.

So what did we lose with OiNK? We lost, just as the poster said, the largest and most complete repository of modern music in existence. Does this mean that music is no longer free? No. Does it mean that albums will no longer leak onto the internet? No.

What did the Record Companies who shut down OiNK gain? They gained the ability to marginalize independent labels just a little more by removing one of the best passive marketing tools that independent artists have. One thing that they definitely didn't gain was more CD sales. I guess it was worth it, since more and more artists are signing to independent labels instead of the big four.

What did the Record Companies who shut down OiNK lose? They lost face, and they lost, arguably, one of the most important marketing tools that they have so far refused to utilize. What's more important, or more profitable than a CD sale? A fan. And if the music isn't good enough to create a fan out of a listener, shutting down OiNK isn't going to increase CD sales.

So the record companies also lost something else: They lost an indefinite number of fans. Fans are people who hadn't heard the music yet, but who would have gotten into it if they had. Fans are the people who go to concerts and buy merchandise. Fans are the people who will buy the same song as a Single, on a CD, and on a record because the art is different.

Fans make you a whole lot more money per person in this day of free music than the average consumer. Maybe the record companies should consider that when they slap fans in the face and say "stop stealing."

Thursday, March 29, 2007

Inaugural Innovation Showcase at USC

Yesterday at the USC Innovation Showcase I had a chance to see many different emerging technologies. Some were subtle in their exposition but grand in theme, and others were just grand.

From Behrokh Khoshnevis’ Contour Crafting technology, which builds a house in 24 hours at 25% of the normal cost, to methanol fuel cells that promise to create energy without any of the problems inherent in the storage, transportation, or use of hydrogen, the conference was chock-full of interesting technology.

I’d say Chris Swain, Todd Caranto and I set up a very well-balanced booth. Up at the top we had the USC for Games banner, with the two large framed pictures of Cloud and everyone reclining in ZML. On one side we had a live demo of fl0w on a PS3 and videos of Immune Attack looping, and on the other we had live demos of the Redistricting Game, ELECT and a rotating presentation of New New Deal. With so much to interact with, how can you go wrong?

One of the more interesting booths at the conference, in my opinion, was Eric Hanson’s booth on panoramic and gigapixel images. Very interesting stuff. But more interesting was something he did with the pictures: he created 3d maps of the high-resolution images, which gave them depth and allowed a camera to move into an image – even a time-lapsed image! He spoke about a month ago at an IMD Forum, if anyone recalls.

This allows for some interesting possible applications (all with clever coding, of course), the most unwieldy being closed sets for video games. Imagine creating a set, then taking a very high resolution panoramic picture of it, and compositing it (and its 3d-map) with additional high resolution panoramic pictures.

This would be more ideal for large outdoor areas and creates very interesting prospects for backdrops and backgrounds that no longer require powerful video cards to render while still giving an incredible level of detail. Instead of worrying about draw distances and available video RAM, you can create beautiful panoramic environments from real places – play a game set in rolling hills in which you can actually see the rolling hills – without drawing textures miles into the distance. If you utilize a composite of multiple time-lapsed images you can even give the mountains in the distance character and depth. There are some very compelling arguments for the use of these images and 3d maps in future games and projects.
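The backdrop trick described above comes down to parallax: as the camera translates, each layer of the image shifts on screen by an amount inversely proportional to its depth, which is what the 3d map supplies. Here is a minimal sketch of that relationship under a simple pinhole-camera assumption (the function name and parameters are my own, not Hanson’s):

```python
def parallax_offset(camera_dx, depth, focal=1.0):
    """Horizontal screen-space shift for a point at `depth` when the
    camera slides `camera_dx` units sideways (simple pinhole model)."""
    return focal * camera_dx / depth

# Nearby scenery shifts a lot; distant mountains barely move.
# That difference is what sells the depth of a layered backdrop.
near = parallax_offset(camera_dx=2.0, depth=4.0)      # 0.5
far = parallax_offset(camera_dx=2.0, depth=1000.0)    # 0.002
```

Because the offsets for distant layers are tiny, the faraway parts of a panoramic backdrop can be treated as nearly static imagery rather than rendered geometry, which is exactly why this approach sidesteps draw-distance and video-RAM limits.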

Another cool booth was one that showcased some technology that allowed the combination of multiple cameras grafted onto the 3d map of an area to give a real time image of what a building or street looks like. The first thing I thought of when I saw this was playing games with teams of people running around USC trying to capture certain buildings from the other team. Give one team red shirts and the other green shirts, write a little code to track specific colors, and we could have a big game of capture the flag on campus! Fun stuff.
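The “little code to track specific colors” could be as simple as a per-pixel dominance test over the camera feed. This is a toy sketch of the idea, assuming frames arrive as rows of (r, g, b) tuples; the names and the 1.5 dominance ratio are made up for illustration:

```python
def classify_pixel(r, g, b, ratio=1.5):
    """Tag a pixel as 'red' team or 'green' team when that channel
    dominates both other channels by the given ratio; else None."""
    if r > ratio * g and r > ratio * b:
        return "red"
    if g > ratio * r and g > ratio * b:
        return "green"
    return None

def count_team_pixels(frame):
    """Tally team-colored pixels in one frame (a list of pixel rows)."""
    counts = {"red": 0, "green": 0}
    for row in frame:
        for r, g, b in row:
            team = classify_pixel(r, g, b)
            if team:
                counts[team] += 1
    return counts
```

Whichever team’s color dominates the pixels near a building in the composite view could be scored as “holding” it, crude, but enough for a campus-wide capture the flag.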

But even in light of this stuff, I think that we had the coolest booth there. Go team innovation!

Sunday, February 18, 2007

Slamdance Post-Mortem Forum Reaction

A couple of days ago I attended an open forum discussing Slamdance’s removal of Super Columbine Massacre RPG (SCMRPG from now on) from its finalists this year. There were some very interesting topics that came up, but there were a couple of things that I think could be considered in a little more depth. I’d like to preface the following by saying that I don’t think SCMRPG should have been pulled from Slamdance after being awarded finalist status – much of the discussion regarding one Peter Baxter (and his decision) gave the impression that he had simply pulled the game due to a personal aversion to the game’s content.

The first thing is that creation is art in the purest sense. SCMRPG had a point – the creator of the game had a reason for making it. This doesn’t make the game good, respectable, or worthy of praise; it just makes it a game. Why should this game be treated differently from other games? There are a couple of questions that stem from this: Is SCMRPG in good taste? Is it a worthy representation of the event – worthy enough to avoid criticism due to its content? Perhaps the questions are part of a bigger uncertainty, which deals with whether games are a medium that can currently portray violence in an artistic sense without receiving criticism.

This was a very important point that Julian Bleecker brought up in the background chat that I don’t think was very well addressed: that games as a medium, right now, might not be capable of handling sensitive subjects that are violent in nature, because of the popular opinion that games trivialize violence. This is not to say that we can’t do it; it just means that people are, as a whole, unwilling to accept such games. What I don’t think was addressed was a plan of action to bring games up to the level of other mediums of expression. Right now we don’t have one, and while it is certainly possible, I think it’s important to keep this in mind as we all tiptoe around the idea of designing and selling a game that addresses very real and serious issues with culture, society, and war.

Maybe a war game that doesn’t include killing? A first person game about loss and emotional trials instead of leveling up. But how would that work?

Jenova mentioned something that ran very much along these lines – he expressed that the player might not be ready to play a game that was sad. He said that in his opinion, the gamer community wasn’t necessarily in a position in terms of development that would allow them to enjoy playing a drama. I think this illustrates perfectly the need for a slow shift from current constructs to more emotional constructs. He mentioned that he was attempting to do that by integrating positive emotions like love and gain into his game mechanics. He doesn’t think negative emotions are viable right now in the community. I think he made a very good point.

SCMRPG is arguably the most important game of the last couple of years. Not because of its gameplay or because of how awesome it is – I played it last night after the forum and it’s nothing to write home about. But the content is polarizing, and because this is getting so much attention, it means that games are getting attention; and the question of whether or not games are appropriate for this type of expression is being asked. Some people are saying “yes” and others are saying “no,” but far more people are saying yes than no, and in many cases, the no is conditional with a “not yet” clause. We are moving forward by leaps and bounds.

I do think that it will be a slow progression from here to there. I do think that we can do it, and I do think that it’s in the cards for the next couple of years. Unfortunately, I don’t think that we’re in a position right now to make games like this and receive positive press for it – the problem of people looking at the history of games and saying “What’s the difference between this and Quake? This is an insult to those who died” is real and unavoidable.

There was a correlation drawn between the film industry and the game industry – specifically Charlie Chaplin. He was a star, and he started off with comedy. After a time he started addressing more controversial issues. Others followed, and over the course of ten years there was a strong movement toward film that addressed serious social and political issues. I can see no impediment currently facing interactive media whose counterpart was not overcome by film in the past.

I think a good question to be asking is not “can we do this” but “how do we do this?” What is the progression? Can we just start right now and make a game about racism or terrorism? What elements of the interactive form do we still need to master in order to create an atmosphere that is touching instead of offensive?

And I don’t necessarily think that we need a “Will Wright” to create that one game that shifts popular opinion. In fact, I don’t think that it will be only one game. It will start with one, then two, then ten. Slowly, these themes will permeate the market and people will become acclimated to the idea of games portraying important events and conveying important themes. Even touchy ones. We have the ability; it will just take time.

Wednesday, January 10, 2007

What are the differences here 1: Literature and Interactive

Recently I was reading a book (Slapstick) by one of my favorite authors, Kurt Vonnegut, and I decided to try and outline why exactly I liked him so much – in 50 words or less. What I came up with was this, in 8 words: He lets my brain play with his words.

Now let’s get into what I think this means. The human brain organizes information and paints pictures using two things: visuals and emotions. Visuals represent the author telling the reader about the setting, situation, and the actions of characters: the tall golden red bridge that the protagonist is walking over (the wind was blowing through the trees, loosening the golden brown leaves that finally signified the end of summer in Golden Gate Park). Emotions are more difficult to communicate, but are also (usually) communicated through the same channels – some authors are better at this than others. The combination of these two things allows us to develop an idea of what is going on in a story, or a dream, or any other visualization. Action (shallow) with emotion (breadth/depth/meaning). Rinse and repeat.

Don’t forget that there’s no real reader choice in how the story progresses, so the author can set up events which will have an emotional effect on the reader (choose your own adventure books don’t count).

Of course, all of these visualizations are taking place in our brains. This means that any time we assimilate information, it flows through a filter that applies our own personality, memories, and desires to that information transparently. When a character appears in a story, sometimes the author will provide information as to his or her appearance. Sometimes the author will not. In this case, our brain will make up the difference and provide us with an image of what we believe the character looks like. This is far more personal than in a Movie or Video Game where the author and not the narrative/player determine what the main character looks like (MMOs and Sims are an exception, but where is the emotion in those games?).

In Vonnegut’s writing he adopts a very casual tone, as if he’s speaking to the reader. Actions are outlined (sometimes), but more often than not actions are not the most important part of the story. The thoughts and feelings of the protagonist are what are communicated, along with some settings and actions in the background. But the emphasis is not on these actions, it’s on the mental and physical state of the main character.

In interactive systems, the game provides the visuals, the sounds and the actions (in the form of visual representation). Each of these is concrete and not open to much interpretation – we can see it on the screen and hear it from the speakers. The only thing left for the player to interpret is the emotion, and usually this is helped along by suspension of disbelief. There’s not a lot of imagination there. So I guess the question we have to ask is: How do you suck players into a story that can’t really be personalized?

“Well look at Final Fantasy,” you might say. That doesn’t really count because the story is largely told through pre-rendered cinematics that are so similar to movies there is virtually no distinction. Then we have some titles coming out this year such as Hellgate: London and Crysis that promise to provide two different ways of sucking the player in a little more.

Hellgate is (purportedly) going to provide a slightly different path through the game to each new player in the form of randomly generated levels. This way each player plays a different form of the same game. This doesn’t mean anything though, because some players will respond to the game and some won’t. The game is not really intelligent. Maybe this is a step in the right direction, but instead of addressing the problem it really only examines ways to expand the gameplay experience within the same construct.

Crysis is a little different in a not-so-cool way. Basically it allows the environment to be nearly completely destructible by the player. This provides a different kind of immersion from Hellgate, but really it’s the same concept: How can we make things better without changing what we’re doing?

I think there are a couple of ways to change how this works, and one of them can be done within the current system.

The Laid-Back Approach

Let’s imagine a game that can change the way that the game looks and feels based on the player who is engaged. To simplify this concept, let’s just say that the player puts on a little skull cap with 2500 wires coming out of it and acts like a lobotomy patient every time they want to play. This way, the game could gauge reactions to content based on the actual response of the player’s brain. After a little tutorial, the game would have an idea of who the player is and how to personalize the game to mean more to the player. This way, visuals could be designed and colored so that they have the desired emotional response from a given player instead of being constant to all players.

The More Hardcore Approach

This has more to do with putting players in a place that they will most likely feel uncomfortable in order to illustrate a point. The game would force them to do things that emotionally jar them, and while using a method like the one above would help to determine that, it’s a lot easier to surprise someone politically or socially than it is to appeal to their softer, more vulnerable inner core. For this reason, this method is a little more reasonable to think about in terms of what’s possible and likely with technology widely available (and priced at a level people can afford).

Another Possible Approach

Back to Vonnegut. How can we apply principles taught to us by current authors to game development? We already determined that a lot of the reason that books make us respond emotionally is due to the personal aspect of all the characters and situations. We feel for the characters – no one feels for the characters in games. Further, the reason that we feel for them in books is because the human in the equation has to make connections between situations and prior knowledge. The reader is forced unknowingly to apply his or her own interpretation to the content of the story or novel. This doesn’t occur in games because the story, visuals, and characters are already concrete – there’s not very much room for personal interpretation.

I guess what I’m getting at is that maybe, if we remove something from games, we might leave a little room for the player’s brain to fill in. Puzzles are great, but it’s important to give the player a more active role in determining what things mean. Maybe, instead of making games fun for the reasons they’re already fun, we can make games fun by using the same techniques that make books fun to read. Then we can use games to tell stories and communicate real meaning in a tasteful way.

Thursday, October 19, 2006

2007: The Year of Linux on the Desktop? (hah)

Can you hear that? It’s still faint, but you can hear it if you listen closely. It’s the sound of a million nerds clapping. Sony has been talking for months about the possible availability of a distribution of Linux for their super soaraway PS3, and now it looks like that will actually come to pass. Linus, your operating system may soon see the light of day.

Intel, Microsoft, Dell and HP Squirm

On October 17th, a company called Terra Soft circulated a press release stating that they would be releasing a distribution of Linux that will run on the hot PS3 hardware. It’s called Yellow Dog Linux, and they’re claiming compatibility for a few very important applications: OpenOffice.org, Firefox, and Thunderbird will all run like a charm on the PS3, or so they say. What’s that? Internet, email, and your term paper. Now, Linux isn’t quite ready for prime time yet; it’s still a little complicated for the average user. But this will change. And when it does, people will have one less reason to buy a desktop PC, and one more reason to buy a PS3.

That means a slow trend from Desktop PCs to Consoles, with Laptops the only mainstay of big OEMs like Dell and HP.

Two Sides of the Fight

This has a couple of other implications besides just Sunday-night-term-papers-on-the-PS3. The world of Desktop PCs has several things on the line here, and PC gaming hangs in the balance. There are two possible outcomes: First, the PS3 usurps the Desktop PC and no one buys them anymore; second, balance remains and people still buy desktops. Now, with recent desktop/laptop sales figures, it’s pretty apparent that desktops are on the way out as Joe Sixpack machines. Me, you, and everyone we know are buying laptops instead of desktops. It’s already happening. In 2005, more laptops were sold than desktops in the United States for the first time. What do desktops have that laptops are missing? Speedy video cards. You can’t play Oblivion on a 1000 dollar laptop, but you can on a 1000 dollar desktop.

Let’s recap. Desktop sales are dropping. Laptop sales are increasing. You can type up your papers on your shiny new PlayStation 3. Where does that leave Dell? It leaves Dell with lower margins across the line, and no desktop market. Where does it leave EA? Investing in Console games. Where does it leave you? Either paying an arm and a leg for a desktop PC you build yourself that consumes an ungodly amount of power (thank ATI and nVidia for that – and I’ll get into it later), or buying a PS3/360. I haven’t mentioned it yet, but don’t think for a second that Microsoft will sit on its laurels while Sony positions the PS3 as a desktop-replacement productivity powerhouse.

Now, I’m not trying to say that the desktop PC market is going to disappear all at once by summer 2007. I’m just pointing out that there is a trend forming here, and that it doesn’t bode well for the desktop market (or the PC Game).

There’s hope. It’s called ATIMD. And Intel. And nVidia come to think of it. These three entities do not want to see Consoles win this bloody war, and they’re going to do everything they can to keep in the game.

The Future of Modern Computing

You’ve no doubt heard that AMD recently acquired ATI. You may not have heard that Intel started hiring GPU (graphics processing unit) engineers, and that nVidia recently started developing a CPU. What we have here is a CPU company that now owns a GPU company, a CPU company developing GPUs, and a GPU company developing CPUs. The timeframe that matters here is 2008; that’s when this all comes together.

Desktops are dead, and the big reason is laptops. The little reason is consoles, but that’s not that important. So how do you keep the PC market alive? The only segment that still has life in the US: laptops.

Currently there are a couple of problems with gaming on laptops. The first is heat. In order to dissipate the heat that your CPU/GPU/RAM create, you have to have fans, and space. Space means big laptops, and big laptops aren’t so hot if you know what I mean. The other problem is power consumption. Currently, midrange and high-end laptops have a GPU, a CPU, and a Northbridge/Southbridge. Each one of these consumes power. With the AMD Athlon/Turion, no Northbridge is needed, so that’s an improvement. The future is going to be dealing with the CPU and GPU.

Over the course of the last year or so we’ve seen a strong push towards Dual-core processors by both Intel and AMD. Intel released their Core Duo processor for laptops in January of 2006, and tied it directly into their Centrino marketing to put two processors in every laptop. The important part is this: What if one of those cores was not a CPU, and instead was a GPU? Then we’d be eliminating a chip, dropping power consumption and heat dissipation, all the while increasing the usability of a laptop for things like games.

This means survival. That’s what AMD was thinking about when it bought ATI. That’s what Intel is thinking about when it hires experienced GPU engineers. That’s what nVidia is thinking about when it starts development on a CPU. These are all companies whose end would largely be spelled by the end of the PC Gaming market. If there’s not a new game that requires a faster CPU/GPU, why would people upgrade their computers? Furthermore, if laptops are the PC market, how do we keep the PC market alive without games for them? Where does my market go if no one buys my product?

The solution is not to make the games for the laptop; it’s to make the laptop for the games. Increase power without destroying battery life. You keep that laptop a laptop. I want a laptop that I can take to class during the day and plug into my monitor to play games on at night. Currently that does exist, but it’s big and nasty and I don’t want to pay three thousand dollars for a laptop that weighs 14 pounds. But by creating a CPU with powerful GPU elements built in, AMD and Intel can stay in the processor game and keep their margins safe from IBM. By developing a CPU/GPU, nVidia will stay alive, as opposed to falling by the wayside.

There’s even an in-between option: a CPU with minimal GPU elements that will run Windows Vista, so that the real GPU in the laptop can turn off and save energy while on battery power. This is probably what we’ll see first, before the technology matures. But eventually, keep an eye out for a transition from buying a CPU and a video card to buying a CPU and a GPU for a second socket, or simply a CPU with a GPU integrated into the processor die.

That’s the future. CPUs that have GPUs on board. It’s going to be a big fight. I like PCs more than Consoles, so you know which side I’m on.

Sunday, October 08, 2006

Why Quad-Core Processors Are a Waste of Money

…for at least the next 18 months.

Because they are. Is that a surprise? How important is it to you, personally, to encode three DivX movies and play Counter-Strike at the same time? Extremely important? Congratulations. You just defined yourself as a virtually non-existent minority, freak. Let’s examine why for just a second.

Dual-Core Processors (and why they make sense, kind of)

You like writing papers. And listening to music. You might even like downloading and watching movies. Probably, you’ve got at least two programs open on your computer apart from the operating system: AIM and iTunes. Chances are you leave those running most of the time. Welcome to 2006, you don’t need a dual core processor.

You play games as well? Whoa there, things just got a little more complicated. Do you make a habit of turning off torrents or Limewire or Morpheus or Bearshare or AIM or Word or Powerpoint or Outlook or Firefox when you play games? If you do, welcome to your savior: dual-core processors. For the first time ever, having two processor cores has a price tag that is reasonable for the average computer user. Before, to enjoy the benefits of Symmetric Multi Processing (SMP from here on out), you would have had to buy a dual-processor motherboard (expensive) and two processors that had SMP enabled (expensive x2). Now, largely due to competition and not consumer demand, you can get a single processor with two cores. So instead of needing all that extra expensive hardware, you can just get a normal system. It looks like Dell sells a dual-core Athlon64 X2 system (with a 19 inch LCD, video card, and a gig of RAM) for about 700 bucks – a whole system that’s equivalent to a dual-processor computer for around the same price as a PS3 with a couple of controllers and no TV. If you build it yourself you can get an even better deal.

Now when you boot up Counter-Strike to FRAG some NUBS, you can leave all your fancy piracy programs open. Counter-Strike will run on one core and most of the rest of the programs (and your OS) will use the other one. Now, instead of getting 62 frames-per-second (FPS) while you’re downloading Zathura, you can get 88.

What I’m getting at here is that dual core processors have practical uses. They can have an effect on your user experience. They CAN. That doesn’t mean that they WILL. Most people consider 35 FPS to be playable for a game. Once you hit 60, you’ve satisfied an even more vast majority of the minority that plays games on PCs. In my example, you increased your FPS by 16, from 62 to 88. Most people wouldn’t notice the difference there. My single-core computer doesn’t choke when I start up a game while I’m torrenting, never has. Dual-cores may have an effect, but for most people that effect is negligible.

It is, however, convenient. Game runs on one core, everything else on the other. That makes sense, right? But the difference is really not all that tangible at the moment. But in the future it will be.

There are a couple of other cases in which a difference really can be seen. Do you use CAD, ray-tracing, 3d rendering, or video transcoding applications? Dual core processors are for you. In fact, quad-core processors are for you too, as these are applications that are very easily multithreaded (for the meek: glossing over some details, multithreading basically splits a program into multiple threads, each of which can be run on a different processor/core). Don’t know what any of those things I mentioned are? Welcome to the rest of the human race. You don’t really need more than one core.

The other case is the case of “the future.” Right now most of the benefits of dual- and multi-core processors cannot be seen. The same goes for the ability of modern processors to process in 64-bit chunks. And for virtualization. There are notable exceptions, but for the most part there are no end-user/consumer applications that take advantage of multiple processors or 64-bit. This is the application-support problem. FOR HARDWARE TO BE USEFUL, THERE MUST BE AN APPLICATION BASE TO TAKE ADVANTAGE OF IT! Currently, that application base does not exist. But it will come; there’ll be pressure from many directions on developers to start multithreading their applications. So, if you're upgrading and don't plan to upgrade again for a while, splurge for a dual-core. The software will catch up with your computer.

A Case for Dual-Core: Alan Wake

Let’s look at a good example. Remedy, the studio that developed the Max Payne franchise, is set to release a new game in the not-so-distant future. It’s called Alan Wake. At IDF, Intel used Alan Wake as an example of a game that took advantage of its new quad-core processors. According to Remedy, the damn thing won’t even run without a dual-core processor. Now, I don’t think that’s how it will end up, but I think it’s reasonable to assume that unless you have a dual-core, you won’t be able to have all the eye-candy and advanced physics turned on.

There’s a reason Alan Wake takes advantage of more than one processor core: Remedy designed it from the very beginning to do so. One core is used for the graphics thread (preparing graphical information for processing by the GPU), another for game logic (AI and whatnot), a third for physics, and the last for anything else the computer needs to do. This is both a case and a caveat. The case: Alan Wake is an example of a mainstream program that will take advantage of being able to run multiple threads simultaneously. The caveat: Remedy had to build it as a multithreaded application from the start, and the most taxing thread still only runs on one core. This means that most games coming out in the next 12-18 months probably won’t be all that well multithreaded. Some might be able to do what Alan Wake does, but even that is crude at this point. Of all those tasks, the really important one is the graphics information; the entire game waits on it to move forward. That’s still assigned to one core. That’s a problem, and it’s not going to be fixed this year or next. Probably not even in 2008.
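
Remedy hasn’t published source, so the following is a hypothetical toy sketch (in Python; their engine is native code) of the dedicated-thread layout described above: one thread per subsystem, with the main loop unable to advance the frame until every subsystem, critically graphics, reports in. All names are invented.

```python
import queue
import threading

def subsystem(name, inbox, finished):
    # Each subsystem runs on its own thread, pulling work per frame.
    while True:
        frame = inbox.get()
        if frame is None:          # shutdown signal
            break
        finished.put(name)         # stand-in for a frame's worth of work

names = ["graphics", "game_logic", "physics", "misc"]
inboxes = {n: queue.Queue() for n in names}
finished = queue.Queue()

threads = [threading.Thread(target=subsystem, args=(n, inboxes[n], finished))
           for n in names]
for t in threads:
    t.start()

# One simulated frame: every subsystem gets work in parallel, but the
# game only moves forward once all four results are back.
for n in names:
    inboxes[n].put("frame 1")
done = sorted(finished.get() for _ in names)

for n in names:
    inboxes[n].put(None)
for t in threads:
    t.join()
print(done)  # ['game_logic', 'graphics', 'misc', 'physics']
```

Note the caveat from above is visible even in the sketch: the graphics subsystem is still a single thread, so a slow graphics frame stalls the whole loop no matter how many cores the other subsystems have.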

Quad-Core Processors (and why you’re an idiot if you buy one)

So you need that extra power, eh? Those extra cores? That extra FPU power? My roommate just got a system with two dual-core processors in it. What’s it doing right now with all that power? It’s running SETI@home. Four times. It’s really doing a great job processing those work units. What else does it do? Not a whole lot. You do not need a dual-core processor to run games, read email, chat with your friends, or write papers. So don’t buy one. Period. Wait two years, and then buy one. By then they might be useful, you know, when the software catches up with the hardware. We still don’t even have programs that really take advantage of SSE3 optimizations, and that shipped with the Pentium 4 more than two years ago.

What’s the moral of the story? If you must be on the cutting edge, get a dual-core. You might even get a chance to take advantage of it. If you’re happy with what you’ve got, sit tight and watch the goofy masses spend their thousands on power they couldn’t use if they tried.

Friday, September 29, 2006

One more episode of “let’s play the console game”

So yesterday HP announced that it will be acquiring Voodoo PC. It seems someone else has realized the Apple-and-Microsoft-and-Sony-taking-over-the-PC-market problem. Now that Dell and HP both have true footholds in the high-performance (read: high-margin) segment, we can step back and take a look at the gaming industry as a whole. There are a couple of trends to see here.

Increase in cost (and amount required) of DDR2 RAM

While the cost of RAM has gone down over the last 18 months, it has actually gone up in the last two. And it will go up again from December until February and will probably stay high until around May of 2007. Why is this? Well, there are several reasons: two relate to Windows Vista and the other relates to Intel. First, estimates are that to play late-current and next-gen games in Windows Vista you will need more than 1GB of RAM. That means we’re going to see a huge increase in demand for 2x1GB kits, which will increase the cost of these still-not-mainstream products. Virtually all computers sold (except ones with Socket 754 processors, a trivial number) use dual-channel RAM. The same supply plus increased demand means increased prices for you and me. This will really ramp up in November and continue to pick up speed until around February, when supply should increase enough to level out prices.

So what we have here is a shift from the average 512MB stick to a 1GB stick. Thanks to Intel, we also have an increase in base speed from 533 to 667 and 800MHz. What I’m going to say next may not seem important, but it really is. The Core 2 Duo is helping to drive this increase in cost. Why? An increase in the required frequency of the RAM due to an increase in front-side bus (FSB) speed. Pentium 4s (and Pentium Ds and Xeons) had 800MHz FSBs, which means they sit pretty with 400 or 533MHz DDR2. Athlon 64s and X2s can use 533, 667, or 800MHz DDR2 (with no particularly significant gain from 667 to 800MHz because of the on-chip memory controller). Core 2 Duos? They want 800 or 1066MHz DDR2. You can put 667MHz RAM in there, or even 533MHz, but you’ll feel the difference. Want to overclock, maybe use some faster memory? Too bad: only 10% of chips produced at the moment can hit 800MHz with reasonable CAS latency. Want 1.1GHz chips? Think 0.5%.
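
Why those particular CPU/RAM pairings? Roughly, it’s bandwidth matching between the front-side bus and dual-channel memory. The back-of-the-envelope version (a simplification I’m adding here; real platforms are messier):

```python
# Back-of-the-envelope bandwidth matching; real platforms are messier.
def fsb_gbps(mhz):
    return mhz * 8 / 1000        # 64-bit (8-byte) front-side bus -> GB/s

def dual_channel_ddr2_gbps(mhz):
    return 2 * mhz * 8 / 1000    # two 64-bit channels -> GB/s

# An 800MHz FSB is saturated by dual-channel DDR2-400...
print(fsb_gbps(800), dual_channel_ddr2_gbps(400))    # 6.4 6.4
# ...while a 1066MHz FSB wants DDR2-533 or better to keep up.
print(fsb_gbps(1066), dual_channel_ddr2_gbps(533))   # 8.528 8.528
```

Raise the FSB and the memory speed needed to feed it rises in lockstep, which is exactly the lever Intel pulled.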

Basically, Intel artificially increased demand for faster memory. They increased demand for faster memory that isn’t there. So prices increase. And when Windows Vista starts pulling its weight and we start seeing consumer mid-range machines shipping with 2GB of RAM instead of 1GB, prices will increase again. In the six months from August to February, Microsoft doubled the RAM-per-computer requirement, and Intel increased the base frequency requirement. Hi-ho.

Decrease in cost of video cards ($ per pixel)

I have an XFX 7900GT (24 pipelines, 550MHz/1.5GHz core/RAM frequencies) in my computer. It cost me 300 dollars in March of this year. A good deal now runs around 240. But this is a high-end card; let’s take a look at the midrange. For around 109 dollars (I saw a deal for this yesterday), you can get a 7600GT (12 pipelines, 520MHz/1.4GHz). This card is a little less than half as fast as mine, and it costs about half of what you’d pay for mine now.

You can get X1900XTs for 250 dollars. These are faster than 7900GTs, and they cost virtually the same. At the X1900XT’s debut, they cost almost $400. Welcome to faster video cards for less money.
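
A quick sanity check on those numbers, using only the prices quoted above:

```python
# Prices as quoted above (USD): (price at debut/in March, price now).
cards = {"7900GT": (300, 240), "X1900XT": (400, 250)}

drops = {name: (then - now) / then
         for name, (then, now) in cards.items()}
for name, drop in drops.items():
    print(f"{name}: down {drop:.0%}")
```

Both high-end cards have shed a fifth to over a third of their price within the year, which is the whole point of this section.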

And this is only going to get better. In about a month, nVidia is going to release their new card, code-named G80. It will have 768MB of RAM and a 384-bit memory bus. In February, AMD/ATI is going to release their new chip, code-named R600. It will have 64 unified pipelines, and probably be the fastest chip to date. What does the introduction of a new technology do to the existing technology? Makes it cheaper.

Increase in cost of next-gen Consoles and decrease in cost of high-efficiency computing

Microsoft’s Xbox 360 costs $299 or $399. The HD-DVD add-on is going to cost $199. Sony’s PS3 is going to cost $499 or $599, with a possible pre-release price drop to around 400/500 from 500/600. It comes with a BD-ROM. Nintendo’s Wii will cost $249.

Games for the PS3 are going to cost, on average, between 50 and 60 bucks.

So you want to buy a PS3, two games, and another controller? You’re looking at around 750 dollars.

At the same time, Intel released its Core 2 processors. These are fast, run cool, and aren’t that expensive at all. What does that mean? Dollars per unit of performance per watt just went way down. The Athlon X2 held the crown before that, and AMD just cut all of its prices by an incredible amount. What does this mean for you? Let’s look at what it costs for you or me to build a small-form-factor PC now:

Barebones (case, PSU, mobo): $200
Processor: $180
RAM: $200
Hard drive: $100
Video card: $200
Optical drive (DVD-ROM): $30
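
Adding that list up:

```python
# The parts list from above (USD).
build = {
    "barebones (case, PSU, mobo)": 200,
    "processor": 180,
    "RAM": 200,
    "hard drive": 100,
    "video card": 200,
    "optical drive (DVD-ROM)": 30,
}
total = sum(build.values())
print(f"Total: ${total}")  # Total: $910
```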

That’s for a Core 2, 2GB of RAM, and a 7900-series card, in a small-form-factor case. Total cost? Around 900 bucks. You’d better bet that Dell and HP can build it for cheaper than that. Oh yeah, and they just acquired the two biggest players in the performance market. And Apple just announced a media center product.

What happens when Apple does something? Everyone follows. Dell just hired 500 new engineers last week. Coincidence?

So what does this all mean?

It means that Intel (with its livelihood at stake) has caught on to IBM. IBM in the Wii. IBM in the 360. IBM in the PS3. IBM absolutely owning the American living room. Right now consoles comprise the vast majority of the video game market, and now they’re riding that into the media center segment with the 360 and the PS3. Dell and HP are scared because, well, what happens when you can type up a paper on your PS3? Their lucrative desktop market disappears forever. They don’t want that to happen, so they’re going to start changing with the times. First they’ll release some cheaper game-oriented small-form-factor PCs through Alienware and Voodoo, and later it will hit the mainstream. Apple already has the Mini and the upcoming iTV. No one is taking this onslaught lightly.

There’s a good thing in here, though. Gaming is becoming a mainstream concern for HP and Dell, which means it’s a mainstream market. If Dell and HP push PCs for gaming, you can bet we’ll see an increase in PC game sales. Make the PC as easy to game on as a console and it will be even bigger. That means more interesting titles for the PC.

2007 is going to be an interesting year.