Microsoft’s Mobile World Strategy at WinHEC

In his WinHEC keynote (at 07:00), Craig Mundie (Microsoft’s Chief Research and Strategy Officer) talked about the fact that in ‘emerging market’ countries, the main computer that millions of people do have is a mobile phone.

He showed a video of a Windows Mobile phone playing audio and allowing illiterate individuals to seek services through icon choices, e.g. getting help dealing with a child’s medical symptoms.

Note that he talked about ‘emerging market’ countries, rather than ‘developing’ countries, but I still don’t think it’s realistic to think that hordes of people can afford a Windows Mobile phone but are still illiterate, so government or remote funding would likely be required.  In fact, he illustrated (with a pyramid) a view of the richest 1 billion people (who have computers), the 2 billion who have limited disposable income, and the 3 billion who do need government- or agency-sponsored programs, all under the umbrella of something called “Microsoft Unlimited Potential” (complete with logo and slogan).

Craig’s keynote was quite dry and there wasn’t really anything too novel or far future-looking from Microsoft Research; in fact the first half of his speech was mostly about application and strategy.

While there was talk about medical assistance for other markets, this keynote really seemed like Microsoft airing its idea for how it can get to the 2 billion (who largely have mobile phones that are not Windows-based, but where having an interoperable Windows-based device could bring new activities) and the 3 billion, where perhaps medical needs could justify getting a Windows-based phone in, and where there is still the opportunity to compete from a fresh start.

Remember, while this may sound like a cynical view, Microsoft is a publicly traded company looking to increase its stock price, which often means growing its market reach, and Craig is the Strategy guy.  While it would have made for a cooler keynote, we aren’t ready for a Windows SpaceCraft edition yet, especially when it’s proving so hard for Microsoft to penetrate the automobile market (but I wish they would).


Network stacking your brain

Here we come Johnny Mnemonic

Ted Berger, at the University of Southern California in the US, is experimenting with electronic-chip replacements for brain cells.

This is fascinating because his approach involves operating at what could be seen as analogous to a low level of the computer networking protocol stack, ignoring how it’s used higher up.

Amongst other things, the ISO layers of computer networking enable two things: I can abstract away lower levels (like a wireless medium or a physical cable) so that they all look like something that can carry IP data packets (with caveats); and on top of that generic platform, I can build other abstraction protocols like TCP (which provides sessions with error correction), or go even further and build web protocols (that let your computer get this page) or Web services that allow computers to request information from each other.
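To make the layering idea concrete, here’s a minimal sketch in Python. All of the class names here are invented for illustration – this is not a real networking library – but it shows the key property: the higher layer only sees the abstraction, never the medium underneath.

```python
# Illustrative sketch of protocol layering. Higher layers code against
# the abstract interface; the physical medium is interchangeable.
# All names are invented for this example.

class Medium:
    """Anything that can carry raw bytes (a cable, a radio link, ...)."""
    def transmit(self, data: bytes) -> bytes:
        raise NotImplementedError

class Cable(Medium):
    def transmit(self, data: bytes) -> bytes:
        return data  # reliable: delivers exactly what it was given

class Wireless(Medium):
    def transmit(self, data: bytes) -> bytes:
        return data  # same interface; a real radio would add loss/delay

class IPLayer:
    """Sends 'packets' over *any* medium -- it never asks which one."""
    def __init__(self, medium: Medium):
        self.medium = medium

    def send_packet(self, payload: bytes) -> bytes:
        header = b"IP:"  # toy stand-in for a real IP header
        return self.medium.transmit(header + payload)

# The layer above IP (say, a toy web request) works unchanged over both:
for medium in (Cable(), Wireless()):
    print(IPLayer(medium).send_packet(b"GET /page"))
```

Swapping `Cable` for `Wireless` changes nothing above the `IPLayer` line – which is exactly the property Berger is betting on for brain tissue.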

Let’s say the brain talks IP packets. Think of this guy’s idea as saying that he can put in other hardware and make it look like something that talks IP packets too. He doesn’t care what the application protocols are – YOUR ACTUAL THOUGHTS AND MEMORIES – on top of this, or how the brain runs them.

He’s actually playing at a much lower level in the stack – more like the Ethernet level – with the electrical signals occurring between brain cells. As Ted puts it: “I don’t need a grand theory of the mind to fix what is essentially a signal-processing problem. A repairman doesn’t need to understand music to fix your broken CD player.”

He is talking about repair too. He means to develop something that can be used to replace damaged brain areas. Later this year, his team of star scientists (some of whom were involved in the early Internet networks) will test it on rats by chemically disabling part of the brain and then bypassing that part with his electronics to see if it works.

Many people have issues with this work. Ted gives the impression that he thinks they don’t understand this stack principle: they think it will not work because they say Ted doesn’t know how brains fundamentally think and store memories (it seems no-one does yet); Ted is saying that it’s irrelevant because he’s just simulating the hardware that the ‘brain OS’ runs on.

If you have used Virtual PC or Remote Desktop, etc. you’ll get the idea that he is effectively emulating the hardware and has figured out the RDP protocol.

There are two big issues around emulation ‘characteristics’ that I can think of that could make replaceable brain cells not work as expected. The first is that while the abstraction principle works in theory, in practice the facade can ‘crack’. For example, while you can send IP packets around your home computer network through a cable or wirelessly, the wireless method can exhibit characteristics that are different to cable, and those can have a knock-on effect at a higher level, e.g. interference may create signal behaviour causing a delay (or even data loss) that is significant to a higher layer (your web browser hangs waiting for a page and may not have retry code). To make this work in the brain, the emulation must have the same characteristics, or at least characteristics that are within tolerance as far as the brain is concerned.

The second is that even if the characteristics are within spec, the remaining deviation may still affect the character of the person. This could happen because the characteristics of the signal processing may contribute to the patterns and behaviour of the brain, so while a brain implant may fix something (or perhaps augment you), it could actually make you a different person – at an inconsequential level, or avalanching to a different personality (as can happen with brain traumas).
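That first issue – a lower layer drifting outside the tolerance a higher layer silently assumed – can be sketched with a toy simulation. The numbers here are entirely made up for illustration: two ‘media’ with different delay characteristics sit under a higher layer with a fixed timeout.

```python
import random

# Toy model (all numbers invented): a higher layer with a fixed timeout
# on top of two media with different delay characteristics. The
# abstraction holds until the lower layer's behaviour drifts outside
# the tolerance the higher layer assumed.

random.seed(0)  # make the run repeatable

def cable_delay_ms() -> float:
    return random.uniform(1, 5)          # steady, low latency

def wireless_delay_ms() -> float:
    # usually comparable to cable, but occasional interference spikes
    spike = 200 if random.random() < 0.1 else 0
    return random.uniform(1, 5) + spike

def fetch_page(delay_fn, timeout_ms: float = 100) -> bool:
    """Higher layer: succeeds only if the lower layer answers in time."""
    return delay_fn() <= timeout_ms

trials = 1000
for name, fn in (("cable", cable_delay_ms), ("wireless", wireless_delay_ms)):
    ok = sum(fetch_page(fn) for _ in range(trials))
    print(f"{name}: {ok}/{trials} requests succeeded")
```

Over cable every request makes the deadline; over wireless a fraction of them miss it, even though both present the identical interface. An implant whose signal timing deviates like the wireless link might pass the interface test and still fail the brain’s timing assumptions.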

I could go on for a while discussing this, but there’s one other thing to get you thinking that I really must mention, and it’s somewhat philosophical.

A lot of brain surgery appears (on TV at least) to be done with the patient conscious (which I believe has something to do with it being the best way to monitor stability, and with the brain not feeling pain directly). Imagine that you had a surgery whereby pieces of your brain were replaced with such implants, and assume they had perfectly matching characteristics so that your behaviour is not altered. Now imagine that this process continues during the surgery until your whole brain is replaced… Remember that you are awake the whole time. Are you still you? Perhaps your brain reacts to the temporary bypasses. If you are still you, what happens if your new brain is totally removed and hooked up to bionic sensory apparatus (new eyes, ears, etc) – with a temporary moment of sensory deprivation while you’re still conscious? You could even go for the human sensory equivalent of a keyboard, video & mouse switch. How about remote desktop from a robot in another room, across the world, or on the moon?

The major point that gives this some extra credibility in my mind is that once a piece (or the whole) of a brain is replaced, we still wouldn’t know, and wouldn’t need to know, how the replacement is being used to do your thinking and memory storage. Even if we could monitor the activity, it is still too complicated to analyse, because there isn’t a computer powerful enough to data mine the whole thing (not even another brain, perhaps), though maybe we could begin to understand the building-block patterns.

They can only emulate less than a billionth of a brain’s cells on some small circuitry, so there’s a way to go yet before you can preserve your brain in a jar powered by a solar cell.