
Tuesday, 15 April 2014

The internet thinks Facebook just killed the Oculus Rift

As announcements go, this one came way out of left field. From the halls of GTC to the echoing environs of Reddit, when Facebook excitedly announced that it had purchased Oculus VR — the company behind the much-desired Oculus Rift — the collective internet was dazzled with a brief moment of total WTF. A few hopefuls tentatively theorized that it might have been an early April Fool’s joke.
When it became clear that it wasn’t, the collective internet generally lost its mind. Reddit’s comments were… well, Redditish, but they weren’t alone. Notch promptly cancelled his plans to bring Minecraft to the Rift. The comments aimed at Palmer Luckey have been absolutely vitriolic. And amid all the rage, some very genuine concerns and valuable perspectives have been aired. (Read: Oculus Rift DK2 goes on sale for $350, features low-latency 1080p displays, more polished appearance.)
First, there’s this: If Mark Zuckerberg labored under the illusion that his company was trusted or seen, in any way, as having its finger on the pulse of gaming, that illusion should now be shattered. Those of us who have been gaming since the 80286 was a hot ticket have generally watched the growth of Flash-based Facebook games with a mixture of skepticism and dismissal. Companies like Zynga may have gotten rich off Facebook engagement, but the kinds of games on Facebook are exactly what hardcore gamers and the Rift’s target audience don’t want.
Second, there’s the fact that many of us resent — deeply — having been turned into commoditized products. People may use Facebook, but that doesn’t automatically mean they like it. Zuckerberg has built a reputation for ignoring privacy, changing features on a whim, and relentlessly searching for more aspects of users’ lives that he can crunch into monetized kibble.
The Oculus Rift DK2 will sport low-persistence displays, which will reduce the nausea-inducing motion blur produced by fast-paced games.
The Snowden leaks and blowback over the always-on Kinect 2.0 should have been a sign to Zuckerberg that his company’s intrusion into the living room via 3D headsets isn’t welcome. There is no way Facebook’s entry into this space would be taken as anything but a cynical attempt to grab more user data, because that’s the reputation Facebook has built for itself. Meanwhile, Zuck’s utterly tone-deaf monologue about buying Oculus Rift because it was the future of social networking couldn’t have sounded worse to people who bought into Rift because it was the future of gaming.
“Oculus has the chance to create the most social platform ever,” Zuckerberg said, “and change the way we work, play and communicate.”
Newsflash, Zucky. Nobody bought a Rift because they want to be part of your social network. Nobody. And so, when you decide to hype your purchase by talking about features that literally nobody wants or paid for, it’s not surprising that people get a little cranky about the whole thing. The solution is to reaffirm your fundamental commitment to the original mission the Oculus Rift set out to achieve, talk about your plans for getting that project off the ground, and emphasize that no, you won’t be using the Rift to tie people to Facebook, push Facebook, integrate Facebook, or attempt, in any way, to make anyone use Facebook.

The broader context

HALbox Kinect
Consumers are generally pretty wary of having some kind of always-on, corporately controlled gadget in the living room.
I think the explosion of fury over Oculus is actually about more than just some angry nerds, because it reveals how deep the distrust goes between the corporations that monetize data and their customer bases. We live in an age when research has proven that most “anonymous” data isn’t anonymous at all. We’re tracked when we step outside; we’re tracked online. Microsoft’s Kinect plans for the original Xbox One raised serious privacy issues in the wake of the Snowden revelations precisely because they made people ask if Microsoft was even in control of its own technology. When the NSA is willing to hack private data links between Google and Yahoo servers, there’s no guarantee that Facebook’s data will stay private, no matter what the company says.
Pushing John Carmack to step up and make some comments about the state of the Oculus Rift would help, because Carmack is a voice that hardcore gamers trust, but I don’t think anyone is going to trust this technology in Zuckerberg’s hands, no matter what he says. Facebook is a company with the motto “Move fast and break things.” It has a history of dictating changes to its users and customers. It doesn’t have a stellar reputation for feedback or strong user engagement, unless “We pretend to listen, then do it anyway” actually counts as a feedback strategy.
It may not be the wrong company to launch a peripheral like the Rift, but it sure as hell looks like it. If the company continues to make grand promises of social engagement, as opposed to focusing on the game-centric strategy that Oculus’ existing backers actually want, the result could be the fastest plunge from hero to unwanted garbage in product history.

Facebook for your face: I have seen the future, and it’s awesome (sorry, gamers)



Like you and everyone else on the internet, I was dumbstruck when Facebook’s Zuckerberg announced that his company would be acquiring Oculus VR, the makers of the Oculus Rift virtual reality headset, for $2 billion. The two companies are so utterly different, with paths so magnificently divergent, that it’s hard to see the acquisition as anything more than the random whim of a CEO who was playing a billion-dollar game of Acquisitory Darts. Or perhaps Zuckerberg just enjoyed playing Doom as a child and thought, what’s the point in being one of the world’s richest people if you can’t acquire your childhood idol, John Carmack?

John Carmack
One wonders what Carmack’s long-term plans are, now that Oculus has been acquired by Facebook

Be patient

First, it’s important to remember that, in the short term, the Oculus Rift is unlikely to be negatively affected by this acquisition. According to Oculus VR co-founder Palmer Luckey, thanks to Facebook’s additional resources, the Oculus Rift will come to market “with fewer compromises even faster than we anticipated.” Luckey also says there won’t be any weird Facebook tie-ins; if you want to use the Rift as a gaming headset, that option will still be available.

Longer term, of course, the picture is a little murkier. Zuckerberg’s post explaining the acquisition makes it clear that he’s more interested in the non-gaming applications of virtual reality. “After games, we’re going to make Oculus a platform for many other experiences… This is really a new communication platform… Imagine sharing not just moments with your friends online, but entire experiences and adventures.”
Facebook for your face: An Oatmeal comic that successfully predicted Facebook’s acquisition some months ago.

Second Second Life

Ultimately, I think Facebook’s acquisition of Oculus VR is a very speculative bet on the future. Facebook knows that it rules the web right now, but things can change very, very quickly. Facebook showed great savviness when it caught the very rapid consumer shift to smartphones — and now it’s trying to work out what the Next Big Thing will be. Instagram, WhatsApp, Oculus VR — these acquisitions all make sense, in that they could be disruptive to Facebook’s position as the world’s most important communications platform.

While you might just see the Oculus Rift as an interesting gaming peripheral, it might not always be so. In general, new technologies are adopted by the military, gaming, and sex industries first — and then eventually, as the tech becomes cheaper and more polished, they percolate down to the mass market. Right now, it’s hard to imagine your mom wearing an Oculus Rift — but in five or 10 years, if virtual reality finally comes to fruition, then such a scenario becomes a whole lot more likely.
Who wouldn’t want to walk around Second Life with a VR headset?
For me, it’s easy to imagine a future Facebook where, instead of sitting in front of your PC dumbly clicking through pages and photos with your mouse, you sit back on the sofa, don your Oculus Rift, and walk around your friends’ virtual reality homes. As you walk around the virtual space, your Liked movies would be under the TV, your Liked music would be on the hi-fi (which is linked to Spotify), and your Shared/Liked links would be spread out on the virtual coffee table. To look through someone’s photos, you might pick up a virtual photo album. I’m sure third parties, such as Zynga and King, would have a ball developing virtual reality versions of FarmVille and Candy Crush Saga. Visiting fan pages would be pretty awesome, too — perhaps Coca-Cola’s Facebook page would be full of VR polar bears and happy Santa Clauses, and you’d be able to hang out with the VR versions of your favorite artists and celebrities, of course.

And then, of course, there are all the other benefits of advanced virtual reality — use cases that have been bandied around since the first VR setups back in the ’80s. Remote learning, virtual reality Skype calls, face-to-face doctor consultations from the comfort of your home — really, the possible applications for an advanced virtual reality system are endless and very exciting.

But of course, with Facebook’s involvement, those applications won’t only be endless and exciting — they’ll make you fear for the future of society as well. As I’ve written about extensively in the past, both Facebook and Google are very much in the business of accumulating vast amounts of data, and then monetizing it. Just last week, I wrote about Facebook’s facial recognition algorithm reaching human levels of accuracy. For now, Facebook and Google are mostly limited to tracking your behavior on the web — but with the advent of wearable computing, such as Glass and Oculus Rift, your real-world behavior can also be tracked.

And so we finally reach the crux of the Facebook/Oculus story: The dichotomy of awesome, increasingly powerful wearable tech. On the one hand, it grants us amazingly useful functionality and ubiquitous connectivity that really does change lives. On the other hand, it warmly invites corporate entities into our private lives. I am very, very excited about the future of VR, now that Facebook has signed on — but at the same time, I’m incredibly nervous about how closely linked we are becoming to our corporate overlords.

Secrets of the PS4: Heavily modified Radeon, supercharged APU design



For months, there have been rumors that the PS4 wouldn’t just be more powerful than Microsoft’s upcoming Xbox — it would be capable of certain compute workloads that Redmond’s next-generation console wouldn’t be able to touch. In an interview last week, Sony lead hardware architect Mark Cerny shed some major light on what these capabilities look like. We’ve taken his comments and combined them with what we’ve learned from other sources to build a model of how the PS4 is likely organized, and what it can do.
First, we now know the PS4 is a single system-on-a-chip (SoC) design. According to Cerny, all eight CPU cores, the GPU, and a number of other custom units are all on the same die. Typically, when we talk about SoC design, we distinguish between “on-die” and “on-package.” Components are on-package if they’re part of a finished processor but aren’t fabbed as a single unit. The Wii U, for example, has the CPU and GPU on-package, but not on-die. Building the entire PS4 as a monolithic die could cut costs long-term and improve performance, but is riskier in the short term.

An overhauled GPU

According to Cerny, the GPU powering the PS4 is an AMD Radeon with “a large number of modifications.” From the GPU’s perspective, the large RAM pool isn’t the innovative part: the PS4 has a unified pool of 8GB of RAM, but AMD’s Graphics Core Next GPU architecture (hereafter abbreviated GCN) already ships with 6GB of GDDR5 aboard workstation cards. The biggest change to the graphics processor is Sony’s modification to the command processor, described as follows:
The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we’ve worked with AMD to increase the limit to 64 sources of compute commands — the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that’s in the system.

That’s a fairly bold statement. Let’s look at the relevant portion of the HD 7970’s structure:
AMD GCN front-end
Here, you can see the Asynchronous Compute Engines and the GPU Command Processor. AMD has always said that it could add more Asynchronous Compute Engine blocks to this structure to facilitate a greater degree of parallelization, but I think Cerny mixed his apples and oranges here, possibly on purpose. First, he refers to specific hardware blocks, then segues into discussing queue depths. AMD released a different slide in its early GCN unveils that may shed some additional light on this topic.


AMD-ACE



Each ACE can fetch queue information from the Command Processor and can switch between asynchronous compute tasks depending on what’s coming next. GCN was designed with some support for out-of-order processing, and it sounds as though Sony has expanded the chip’s ability to monitor and schedule how tasks are executed. It’s entirely possible that Sony has added additional ACEs to GCN to support a greater amount of asynchronous computing capability, but simply stuffing the front of the chip with 61 additional ACEs wouldn’t magically make more execution resources available.
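
To make the queue model concrete, here is a toy Python sketch of many compute queues feeding a fixed pool of compute units (CUs). It is purely illustrative: the real arbitration happens in silicon, is multi-level, and its exact policies aren't public. The point it demonstrates is the one above: extra queues increase how much work can be pending, not how fast that work executes.

```python
from collections import deque

NUM_QUEUES = 64   # the compute command queue count Cerny describes
NUM_CUS = 18      # execution resources stay fixed regardless of queue count

# 64 independent queues of pending compute commands (toy workload).
queues = [deque() for _ in range(NUM_QUEUES)]
queues[3].extend(["physics_step", "cloth_sim"])
queues[41].append("audio_raytrace")

def arbitrate(queues, free_cus):
    """One round-robin pass over the queues, dispatching while CUs are free.
    Real GCN arbitration is multi-level and priority-aware; this is a toy."""
    dispatched = []
    for q in queues:
        if free_cus == 0:
            break
        if q:
            dispatched.append(q.popleft())
            free_cus -= 1
    return dispatched

# More queues means more places to park pending work, not more CUs.
print(arbitrate(queues, NUM_CUS))  # ['physics_step', 'audio_raytrace']
```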


Now we turn our attention to the memory architecture. We know the PS4 uses a 256-bit memory bus, and Cerny specifies 176GB/s of bandwidth. That works out to a GDDR5 clock speed of 1375MHz, which is comfortably within the current range of GDDR5 products already on the market. We’ve put together a set of what we consider to be the top three most likely structures, their strengths, and their weaknesses.
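
That arithmetic is easy to verify. A minimal sketch, assuming GDDR5's usual effective data rate of four transfers per command clock:

```python
# Back-of-the-envelope check of the 176GB/s figure.
bus_width_bits = 256       # PS4 memory bus width
gddr5_clock_mhz = 1375     # GDDR5 command clock
transfers_per_clock = 4    # GDDR5 moves data at 4x its command clock

bytes_per_transfer = bus_width_bits // 8                     # 32 bytes
effective_mt_s = gddr5_clock_mhz * transfers_per_clock       # 5500 MT/s
bandwidth_gb_s = bytes_per_transfer * effective_mt_s / 1000  # MB/s -> GB/s

print(bandwidth_gb_s)  # 176.0
```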

Option 1: A supercharged APU-style design

AMD has published a great deal of information on Llano and Trinity’s APU design. Llano and Trinity share a common structure that looks like this:
AMD APU diagram
In Llano and Trinity, the CPU-GPU communication path varies a great deal depending on which kind of data is being communicated. The solid line (Onion) is a lower-bandwidth bus (2x16B) that allows the GPU to snoop the CPU cache. The dotted lines are the Radeon Memory Bus (Garlic). This is a direct link between the GPU and the Unified North Bridge architecture, which contains the memory controller.




Why Game of Thrones is the most pirated TV show in the world

While hard and fast figures are tough to come by, it appears that the Game of Thrones season four premiere will become the most pirated TV episode of all time, racking up around one million downloads within 12 hours of the US airing, and a few million more in the days following. People pirate TV shows, movies, games, and music for a variety of reasons, but it mostly boils down to two core factors: a) Money (legally obtaining the files can be prohibitively expensive), and b) Ease of use (many legally obtained files are locked down with DRM, preventing you from truly owning them and doing whatever you want with them). With these two factors in mind, let’s take a look at why people pirate Game of Thrones.

Legally watching Game of Thrones is expensive

Other than its inherent popularity, the main reason that Game of Thrones is pirated is that it can be very hard and expensive to watch legally in many countries around the world. While many shows eventually end up on Netflix, Hulu, or Amazon Prime, Game of Thrones is distributed by HBO — and the only way to watch an HBO show is with an HBO subscription, or to wait for the eventual DVD/Blu-ray release.



TorrentFreak analyzed the cost of an HBO subscription in the US, UK, Australia, Canada, and the Netherlands — and its findings are rather grim. In the US, HBO generally runs between $15 and $25 per month (so, ~$5 per episode) — but that’s before the cable/internet subscription (which puts it up to around $100 per month), and not including the fact that many subscriptions have a minimum contract of six or 12 months (so, the real range is around $40 to $120 per episode). In Australia, it’s even worse: The cheapest HBO package will run you $70 per month, with a minimum contract of six months. After some other added costs, it works out at roughly $50 for a single episode of Game of Thrones.


It’s a similar story in the UK, where it’ll cost you around $35 per month for an HBO subscription, with a minimum contract of 12 months, for a total of $42 per episode (remember, Game of Thrones only runs for 10 weeks). Canada gets HBO fairly cheaply ($18 per month), but you also need a digital or satellite TV subscription on top of that, putting the per-episode price up around US levels. In the Netherlands, the situation is actually not too bad: You can pick up Game of Thrones for around $9 per month, and some providers allow you to cancel your subscription at any time.
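
As a quick sanity check on those per-episode figures, here is a minimal Python sketch that spreads a minimum contract's total cost over one 10-episode season (the added-fee adjustments mentioned above are left out):

```python
def per_episode_cost(monthly_fee, contract_months, episodes=10):
    """Total minimum-contract cost spread across one 10-episode season."""
    return monthly_fee * contract_months / episodes

# UK: ~$35/month with a 12-month minimum contract
print(per_episode_cost(35, 12))  # 42.0 -- the ~$42/episode quoted above

# Australia: ~$70/month with a 6-month minimum, before added costs
print(per_episode_cost(70, 6))   # 42.0, pushed to ~$50 by the extra fees
```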


Obviously, very few people in their right mind would pay $50 per episode, or $500 for the season — and so Australians download the show instead. (Rather unbelievably, the corporate director of Foxtel, the Australian provider of HBO, believes that it’s completely OK to charge $50 per episode.) It’s a similar story in the US, UK, and Canada, where you’re probably paying upwards of $40 per episode. Really, though, it’s the total subscription cost that you need to look at — you can just about justify $50 for a single episode, but being locked into a $500+ multi-month contract is insane.


As you can imagine, there are a lot of people in the world who really want to watch Game of Thrones but don’t have $500+ to spare.

But is it really that simple?

By this point, you’re probably aware that I’ve oversimplified things to tell a more dramatic story. When you pay ~$30 per month for that HBO package, you usually get a bunch of other channels as well. Yes, being locked into a $60-per-month internet/cable subscription bumps up the price — but you’d need that internet access to pirate the show.
This is where the second major cause of piracy — ease of use — enters the picture. Yes, you get lots of other TV shows as part of the subscription bundle, but that’s just not how we consume media any more. We don’t want to buy a huge batch of things and then slowly work our way through it all, including the gristly bits that we don’t like — we live in an age where we choose exactly what we want to consume, and when we want to consume it. If HBO made individual episodes of Game of Thrones available to purchase worldwide for $5 immediately after the US airing, piracy would drop dramatically. If those files also lacked DRM, allowing you to move them freely between your smartphone and home theater setup, piracy would probably become a non-issue overnight.
The thing is, HBO knows this. Broadcasters around the world know this. The various rights holders (actors, writers, authors) know this. But still, HBO does nothing about it. Why? Because, as flawed as the system appears to be, it still works. HBO is still making tons of money ($5 billion in 2013), as are the various worldwide broadcasters and rights holders. Game of Thrones is massively popular, driving huge levels of piracy — but also large amounts of DVD and merchandise sales. Yes, broadcast TV could probably be done in a better way (see: Netflix and House of Cards), but the status quo is obviously still working in HBO’s favor — so why rock the boat?

Google invents smart contact lens with built-in camera: Superhuman Terminator-like vision here we come


Google has invented a new smart contact lens with an integrated camera. The camera would be very small and sit near the edge of the contact lens so that it doesn’t obscure your vision. By virtue of being part of the contact lens, the camera would naturally follow your gaze, allowing for a huge range of awesome applications, from the basis of a bionic eye system for blind and visually impaired people, through to early warning systems (the camera spots a hazard before your brain does), facial recognition, and superhuman powers (telescopic and infrared/night vision). In related news, Google Glass is publicly available today in the US for one day only (still priced at $1500).

This new smart contact lens would have a tiny CMOS camera sensor just below your pupil, a control circuit, and some method of receiving power wirelessly (more on that later). Because an imaging sensor, by definition, has to absorb light, it wouldn’t be transparent — but it could probably be color-matched to your iris, so that your eyes don’t look too freaky.
As you can probably imagine, there are some rather amazing applications if you have two cameras embedded in your contact lenses. You can’t do much in the way of image processing on the contact lens itself, but you could stream the footage to a nearby smartphone or head-mounted display (i.e. Google Glass), where a more powerful computer could perform all sorts of real-time magic. Google suggests that the cameras might warn you if there’s oncoming traffic at a crosswalk — useful for a normal-sighted person, but utterly invaluable for a blind or partially sighted person. For me, the more exciting possibilities include facial recognition (a la Terminator), and abilities that verge on the super- or transhuman, such as digital zoom and infrared thermal night vision.
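
That lens-plus-nearby-computer split is easy to prototype today. Here is a minimal Python/OpenCV sketch, using an ordinary webcam as a stand-in for the hypothetical lens camera, that runs the kind of real-time face detection described above on the "nearby computer" side:

```python
import cv2

# Stand-in for the lens camera: an ordinary webcam feed.
cap = cv2.VideoCapture(0)

# Off-the-shelf Haar cascade face detector, running on the nearby computer.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("lens feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```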


Beyond the medical- and consumer-oriented applications, you can also imagine the possibilities if police were equipped with contact lenses that could spot the faces of known criminals in a crowd, or a bulge under a jacket that could be a concealed weapon. Oh, and the most exciting/deadly application of them all: Soldiers with smart contact lenses that alert them to incoming fire, provide infrared vision that can see through smoke, and offer real-time range finding for more accurate sniping…

This invention, from the Google X skunkworks lab, comes in the form of a patent that was filed in 2012 and recently published by the USPTO. Earlier this year, Google announced that it was working on a smart contact lens for diabetics that provides real-time glucose readings from your tears. As far as we can tell, there’s no timeline for real-world trials of either variety of contact lens — but we can tell you that the technology to create such devices is very nearly here. Way back in 2011, a smart contact lens with an LED display was trialed in the lab.

Moving forward, there are some concerns about power delivery (there’s no space for a battery, of course, so power has to be beamed in wirelessly), and whether it’s wise to have a wireless device sitting on a rather sensitive organ, but I don’t think these will be game-breaking problems. For now, we’re talking about fairly chunky contact lenses that are best suited to laboratory testing — but it shouldn’t be more than a few years until real, comfortable smart contact lenses come to market.

 
