It was inevitable, was it not? T-Mobile’s G1 lasted an entire week as the T-Mobile G1; now, it’s really anyone’s G1. Thanks to the kids over at Unlock T-Mobile G1, any owner with a few spare moments and $22.99 can open their handset up for use on AT&T or any other GSM network across the globe. Reportedly, prospective unlockers simply hand over the aforementioned cash and their IMEI code (scary, we know), and in return they receive an eight-digit unlock code that frees the handset from the bonds of T-Mobile. Initial tests have shown that calling and texting work just fine on non-native networks, but the inability to even log in to Gmail (and thus the Android Market, etc.) puts a real damper on things. No worries — we’re sure those minor hindrances will be worked out in short order. A video full of proof is waiting just beyond the break.
[Via Android Community]
We’ve been receiving reports from several gamers — and the Lionhead forums are positively buzzing — about a game-breaking glitch in Fable II. The glitch occurs during the quest called “Monk’s Quest,” in which players are tasked with speaking to the Abbot of the Temple of Light in Oakfield. Apparently, if players run into the temple, begin the conversation with the Abbot, and then leave the region before the conversation is finished, they are unable to resume the quest, thus preventing them from completing the main story. Lionhead is currently working on the issue and advises all players to make sure to finish the conversation and cut scene before leaving the area. At the moment there is no fix for those who have already encountered the glitch.
Another glitch discovered in the game affects players who take one of their heroes into a co-op session with another player who is starting a new game. Apparently, when that new game leaves the childhood stage, all experience and gold will be wiped from the co-op character. Lionhead hopes to address this glitch in a title update, but advises players to only play co-op after the childhood stage has been completed.
[Thanks to everyone who sent this in]
John Markoff at the New York Times has updated his article on a potential Apple netbook—following Steve Jobs’ comments—with an interesting piece of news that reminds me of the first days of the JesusPhone, when an unidentified Apple device was detected for the first time in the traffic logs of some web sites. Markoff even provides vague specifics about this potential MacBook nano/MacBook touch/iPhone slate which was spotted in the logs of an unnamed “search engine company”:
UPDATED: That would seem to confirm findings that a search engine company shared with me on condition that I not reveal its name: The company spotted Web visits from an unannounced Apple product with a display somewhere between an iPhone and a MacBook. Is it the iPhone 3.0 or the NetMac 1.0?
As with the original iPhone—which was spotted online in web traffic logs—I won’t be surprised if this is real. Other Apple computers were detected online first as well, although some of them—like multiprocessor Macs running SETI or other distributed computing tasks—were never released. Unlike Markoff, however, I believe that Steve was completely honest when he said “we don’t know how to build a sub-$500 computer that is not a piece of junk”, arguing that the company’s mission is to give more at the same price points, not fewer features for less money.
So out of pure instinct, I think we can rule out a MacBook nano netbook. Instead, if this is indeed a new unannounced Apple product, here at Gizmodo we are thinking about an iPhone HD with an updated 800 x 480 pixel display, probably coming in 2009. That resolution sits somewhere between the iPhone’s 480 x 320 pixels and the MacBook’s 1280 x 800 pixels, which is completely reasonable: other phones—like the HTC Touch HD—already have these ultra-sharp screens.
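For a rough sense of scale, a back-of-the-envelope comparison of the pixel counts involved (using only the resolutions quoted above):

```python
# Pixel-count comparison for the three displays discussed above
# (resolutions as quoted in the post).
displays = {
    "iPhone": (480, 320),
    "rumored iPhone HD": (800, 480),
    "MacBook": (1280, 800),
}

pixel_counts = {name: w * h for name, (w, h) in displays.items()}

for name, count in pixel_counts.items():
    print(f"{name}: {count:,} pixels")
```

At 384,000 pixels, the rumored display would pack 2.5 times the pixels of the current iPhone, yet still well under half of the MacBook’s—squarely “somewhere between” the two.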
In addition, as Jobs pointed out in yesterday’s financial conference call, Apple already has a strong entry in the small computing market with the iPhone. It is only logical for the company—and probably less risky and cheaper—to keep iterating on the iPhone, upgrading its screen to one with a higher dots-per-inch count in the next model (but of course, I will always keep dreaming about the MacBook touch). [NYT]
Update: Some people argue that it may be a hackintoshed netbook, a computer running a modified version of Mac OS X. This may be the case, but I’m sure the unnamed search company—which won’t reveal its own name—already has plenty of hackintosh netbooks in its logs. On top of that, hackintosh computers identify themselves as a Mac Pro, regardless of their actual hardware.
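To illustrate what that kind of log-spotting involves: web servers record a User-Agent string for every visit, and an unfamiliar Apple platform token stands out against the known ones. Everything below is a hypothetical sketch—the log lines, and especially the unannounced device’s User-Agent, are invented for illustration, not what any real search company saw:

```python
import re

# Invented access-log lines; the "Unknown Apple Device" User-Agent is
# purely hypothetical.
log_lines = [
    '1.2.3.4 "GET / HTTP/1.1" 200 "Mozilla/5.0 (iPhone; U; CPU like Mac OS X)"',
    '5.6.7.8 "GET / HTTP/1.1" 200 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5)"',
    '9.9.9.9 "GET / HTTP/1.1" 200 "Mozilla/5.0 (Unknown Apple Device; CPU OS X)"',
]

# Platform tokens for Apple hardware we already know about.
KNOWN_APPLE = re.compile(r"iPhone|iPod|Macintosh")

def unfamiliar_apple_hits(lines):
    """Return log lines that look Apple-ish but match no known device token."""
    return [
        line for line in lines
        if ("Apple" in line or "Mac OS X" in line or "OS X" in line)
        and not KNOWN_APPLE.search(line)
    ]

print(unfamiliar_apple_hits(log_lines))
```

A hackintosh netbook reporting itself as a Mac Pro would match a known token and never surface in a scan like this—which is the author’s point about why the mystery device probably isn’t one.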
Nvidia dropped by today to demo some of the awesome things that the GeForce 9400M in the new MacBooks can do that Intel’s integrated graphics just can’t touch, and to discuss a few technical points. Besides confirming that you’ll see it in other notebooks soon, they definitively answered some lingering questions about the chip’s capabilities: It can support up to 8GB of RAM. It can do on-the-fly GPU switching. And it can work together with the MacBook Pro’s discrete 9600M GT. But it doesn’t do any of those things. Yet.
Since the hardware is capable of all of these things, they can all be enabled by a software/firmware/driver update. Whether or not that happens is entirely up to Apple. While you can argue that Hybrid SLI—using both GPUs at once—has a limited, balls-to-the-wall utility, being able to switch between the integrated 9400M and the discrete 9600M GT on the fly without logging out would obviously be enormously easier than the current setup, and would allow for some more creative automatic energy preferences—discrete when plugged in, integrated on battery. Hell, you can do it in Windows on some machines.
But since it’s Apple, it’s also entirely possible we’ll never see any of this come to pass—GPU-accelerated video decoding has totally been possible with the 8600M GT in the previous-gen MacBook Pros, and, well, you know where that stands. [Apple & Nvidia Coverage@Giz]
Now that NVIDIA’s GeForce 9400M has made its debut in Apple’s new MacBooks, Technical Marketing Director Nick Stam says that five major notebook vendors are planning to ship systems with the chipset — though we don’t know if that includes Apple or not. Stam expects NVIDIA will carve out 30 percent of the integrated graphics market for itself, partly by improving other experiences besides games — Google Earth, photo editing, day-to-day video encoding, and other activities performed by people who use keys besides W, A, S, and D. Frankly, we’re just thankful we’ve evolved past the days when we needed a 19-inch monster to perform high-impact 3D tasks without sacrificing to the sinister gods of screen tearing.
Optical lithography is the secret sauce in the fabbing technology that makes the chips inside your computer, and a clever bunch at the University of California, Berkeley have worked out a new adaptation of the tech to produce chips that could be ten times more detailed. It basically combines a hard-disk-like spinning platter and scanning head with a metal lens that focuses UV light onto smaller spots: by rotating a chemically treated silicon wafer beneath the head, you can achieve far more precise chips than by using a photo mask.
The metal lens is the key: it’s in fact a plasmonic lens, which achieves a smaller “focused” spot of UV light than is possible within the diffraction limits of normal optics. The photoresist surface of the chip needs to be very close beneath the lens for this to work, hence the choice of the “flying head” arm—much like that in a conventional hard drive. Aerodynamic forces between the spinning wafer and the head keep the lens around 20nm above the silicon wafer.
Why do we care about this? Conventional photolithography can achieve chip details down to a size of about 35nm, but with this technique the size could shrink to as little as 5nm. It’s fast, too, and doesn’t require million-dollar lithography photomasks. Plus, beyond denser, more powerful computer chips, the technique could even be modified to create optical data storage systems with fantastic amounts of capacity. Clever stuff, and it might keep Moore’s law ticking over for a while yet.
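For context, that ~35nm figure tracks the textbook Rayleigh criterion for projection lithography; the wavelength, numerical aperture, and process-factor values below are typical industry numbers, not figures from the Berkeley work:

```latex
CD \approx k_1 \, \frac{\lambda}{\mathrm{NA}}
\;\approx\; 0.25 \times \frac{193\ \mathrm{nm}}{1.35}
\;\approx\; 36\ \mathrm{nm}
```

A plasmonic lens sidesteps this far-field limit by working in the near field, where evanescent surface waves can confine light to spots much smaller than the wavelength—hence the need to fly the lens a mere 20nm above the wafer.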
The other day we wrote a short article about a Navteq vehicle on the streets of Finland and asked them why and how they were using the cameras on top of the vehicle. They were nice enough to contact us with all this information and a bunch of pics:
NAVTEQ has over 1,000 specially trained geographic analysts driving the roads updating and verifying map data. Each analyst is based locally and manages his or her own geographic area handling the entire process from start to finish. They travel in specially equipped vehicles with a GPS receiver accurate to one metre attached to the roof – this is connected to an on-board computer using special software to help them check the accuracy of current maps or create new ones.
Developed for the job, the software enables the field teams to download sections of the live database to a laptop for updating on the road. Changes to the maps can be made in several different ways: for standard features – such as one way streets – there is a range of icons which can be dropped straight on to the image. Pen tablets for handwritten annotations have also been incorporated into the hardware. Voice notes are particularly useful in complicated situations. A sound file describes what can be seen on the road and an icon drops on to the screen showing the exact position using DGPS.
In addition, multi-view video cameras are used as a back-up record and for detailed follow-up analysis. The six cameras capture images every five metres and provide a 270° view of the road. Images are continuously sent to a server in the boot of the car and can be viewed and analysed in real time on a screen inside the car. Data is transferred via removable hard drives. This film can be reviewed at any time – even years later – to check specific details.
Thanks Navteq and Sue.