Tuesday, 15 April 2014

The Top 5 Action Movies Of 2013


My Top 5 Movies List + Trailers + Links + Downloads.
(Marvel's Top 3 + Others)
Unique List:

5: Captain America: The Winter Soldier (2014)



After the cataclysmic events in New York with The Avengers, Steve Rogers, aka Captain America, lives quietly in Washington, D.C., trying to adjust to the modern world. But when a S.H.I.E.L.D. colleague comes under attack, Steve becomes embroiled in a web of intrigue that threatens to put the world at risk. Joining forces with the Black Widow, Captain America struggles to expose the ever-widening conspiracy while fighting off professional assassins sent to silence him at every turn. When the full scope of the villainous plot is revealed, Captain America and the Black Widow enlist the help of a new ally, the Falcon. However, they soon find themselves up against an unexpected and formidable enemy: the Winter Soldier.

For more, go here: http://sh.st/wdiJO

4: The Avengers (2012)

 

Marvel's The Avengers, or simply The Avengers, is a 2012 American superhero film based on the Marvel Comics superhero team of the same name, produced by Marvel Studios and distributed by Walt Disney Studios Motion Pictures.

For more, go here: http://sh.st/wdi8z

3: The Dark Knight Rises (2012)

 

The Dark Knight Rises is a 2012 British-American superhero film directed, produced, and co-written by Christopher Nolan. (One of my favorite movies.)

For more: http://sh.st/wdor3


2: Lone Survivor (2013)


Based on The New York Times bestselling true story of heroism, courage and survival, Lone Survivor tells the incredible tale of four Navy SEALs on a covert mission to neutralize a high-level al-Qaeda operative who are ambushed by the enemy in the mountains of Afghanistan. (Full synopsis in the downloads section below.)
 

 

1: Captain Phillips (2013) 

 

A superb movie, packed with action and adventure, and based on a true story.
The film focuses on the relationship between the Alabama's commanding officer, Captain Richard Phillips, and the Somali pirate captain, Muse, who takes him hostage. (Download links below.)

Lone Survivor (2013)



Based on The New York Times bestselling true story of heroism, courage and survival, Lone Survivor tells the incredible tale of four Navy SEALs on a covert mission to neutralize a high-level al-Qaeda operative who are ambushed by the enemy in the mountains of Afghanistan. Faced with an impossible moral decision, the small band is isolated from help and surrounded by a much larger force of Taliban ready for war. As they confront unthinkable odds together, the four men find reserves of strength and resilience as they stay in the fight to the finish.
1: Torrent (large): http://sh.st/wduyQ
2: Torrent (low): http://sh.st/wdupf
3: Online: http://sh.st/wdua5

Captain Phillips (2013)

 

The film focuses on the relationship between the Alabama's commanding officer, Captain Richard Phillips, and the Somali pirate captain, Muse, who takes him hostage. Phillips and Muse are set on an unstoppable collision course when Muse and his crew target Phillips' unarmed ship; in the ensuing standoff, 145 miles off the Somali coast, both men will find themselves at the mercy of forces beyond their control.

1: Torrent: http://sh.st/wdyn6
2: Torrent (low): http://sh.st/wdyGZ
3: Online: not yet available.

The Dark Knight Rises (2012)


The Dark Knight Rises is a 2012 British-American superhero film directed, produced, and co-written by Christopher Nolan. (One of my favorite movies.)
1: Torrent (large): http://sh.st/wdtG5
2: Torrent (low): http://sh.st/wdt5C
3: Online: http://sh.st/wdy0r

X-Men: Days of Future Past (2014)



X-Men: Days of Future Past is an upcoming 2014 American superhero film, based on the fictional X-Men characters appearing in Marvel Comics and on the 1981 Uncanny X-Men storyline "Days of Future Past" by Chris Claremont and John Byrne.

      Release date: May 23, 2014 (USA) 
      Director: Bryan Singer 
      Prequel: X-Men: The Last Stand 
      Sequel: X-Men: Apocalypse 
      Characters: Wolverine, Mystique, Magneto, Bolivar Trask, Storm, Others
 Watch The Trailer

1: Torrent (large): http://sh.st/wdrVX
2: Torrent (low): http://sh.st/wdtoX
3: Online (HD + Flash): http://sh.st/wdtfl

Captain America: The Winter Soldier (2014)

 After the cataclysmic events in New York with The Avengers, Steve Rogers, aka Captain America, lives quietly in Washington, D.C., trying to adjust to the modern world. But when a S.H.I.E.L.D. colleague comes under attack, Steve becomes embroiled in a web of intrigue that threatens to put the world at risk. Joining forces with the Black Widow, Captain America struggles to expose the ever-widening conspiracy while fighting off professional assassins sent to silence him at every turn. When the full scope of the villainous plot is revealed, Captain America and the Black Widow enlist the help of a new ally, the Falcon. However, they soon find themselves up against an unexpected and formidable enemy: the Winter Soldier.

  • Release date: April 4, 2014 (USA)

  • Directors: Anthony Russo, Joe Russo

  • Running time: 136 minutes

  • Prequel: Captain America: The First Avenger

  • MPAA rating: PG-13
    • Genres: Action, Sci-Fi, Adventure
    Trailer:
    Below are torrent links and online streaming links. I provide two types of torrent: 1: HD, best quality (large file), and 2: HD, smaller file size. The online links include both.
    1: Torrent (best quality): http://sh.st/wdeoe
    2: Torrent (smaller size): http://sh.st/wdeh7

  • The internet thinks Facebook just killed the Oculus Rift

    As announcements go, this one hit everybody way out of left field. From the halls of GTC to the echoing environs of Reddit, when Facebook excitedly announced that it had purchased Oculus VR — the manufacturers behind the much-desired Oculus Rift — the collective internet was dazzled with a brief moment of total WTF. A few hopefuls tentatively theorized that it might have been an early April Fool’s joke.
    When it became clear that it wasn’t, the collective internet generally lost its mind. Reddit’s comments were… well, Redditish, but they weren’t alone. Notch promptly cancelled his plans to bring Minecraft to the Rift. The comments aimed at Palmer Luckey have been absolutely vitriolic. And among all the rage, some very genuine concerns and valuable perceptions have just been aired. (Read: Oculus Rift DK2 goes on sale for $350, features low-latency 1080p displays, more polished appearance.)
    First, there’s this: If Mark Zuckerberg labored under the illusion that his company was trusted or seen, in any way, as having its finger on the future of gaming, those illusions should be shattered. Those of us who have been gaming since the 80286 was a hot ticket have generally watched the growth of Flash-based Facebook games with a mixture of skepticism and dismissal. Companies like Zynga may have gotten rich off Facebook engagement, but the kinds of games on Facebook are exactly what hardcore gamers and the Rift’s target audience don’t want.
    Second, there’s the fact that many of us resent — deeply — having been turned into commoditized products. People may use Facebook, but that doesn’t automatically mean they like it. Zuckerberg has built a reputation for ignoring privacy, changing features on a whim, and relentlessly searching for more aspects of users’ lives that he can crunch into monetized kibble.
    Oculus Rift DK2: Lower persistence displays
    The Oculus Rift DK2 will sport low-persistence displays, which will reduce the nausea-inducing motion blur produced by fast-paced games
    The Snowden leaks and blowback over the always-on Kinect 2.0 should have been a sign to Zuckerberg that his company’s intrusion into the living room via 3D headsets isn’t welcome. There is no way Facebook’s entry into this space would be taken as anything but a cynical attempt to grab more user data, because that’s the reputation Facebook has built for itself. Meanwhile, Zuck’s utterly tone-deaf monologue about buying Oculus Rift because it was the future of social networking couldn’t have sounded worse to people who bought into Rift because it was the future of gaming.
    “Oculus has the chance to create the most social platform ever,” Zuckerberg said, “and change the way we work, play and communicate.”
    Newsflash, Zucky. Nobody bought a Rift because they want to be part of your social network. Nobody. And so, when you decide to hype your purchase by talking about features that literally nobody wants or paid for, it’s not surprising that people get a little cranky about the whole thing. The solution to this is to reaffirm your fundamental commitment to the original mission the Oculus Rift set out to achieve, talk about your plans for getting that project off the ground, emphasize that no, you won’t be using the Rift to tie people to Facebook, push Facebook, integrate Facebook, or attempting, in any way, to make anyone use Facebook.

    The broader context

    HALbox Kinect
    Consumers are generally pretty wary of having some kind of always-on, corporately-controlled gadget in the living room.
    I think the explosion of fury over Oculus is actually more interesting than just some angry nerds because it reveals how deep the distrust goes between the corporations that monetize data and their customer bases. We live in an age when research has proven that most “anonymous” data isn’t anonymous at all. We’re tracked when we step outside, we’re tracked online. Microsoft’s Kinect plans for the original Xbox One raised serious privacy issues in the wake of the Snowden revelations precisely because it made people ask if Microsoft was even in control of its own technology. When the NSA is willing to hack private data links between Google and Yahoo servers, there’s no guarantee that Facebook’s data will stay private, no matter what the company says.
    Pushing John Carmack to step up and make some comments about the state of the Oculus Rift would help, because Carmack is a voice that hardcore gamers trust, but I don’t think anyone is going to trust this technology in Zuckerberg’s hands, no matter what he says. Facebook is a company with the motto “Move fast and break things.” It has a history of dictating changes to its users and customers. It doesn’t have a stellar reputation for feedback or strong user engagement, unless “We pretend to listen, then do it anyway” actually counts as a feedback strategy.
    It may not be the wrong company to launch a peripheral like the Rift, but it sure as hell looks like it. If the company continues to make grand promises of social engagement as opposed to focusing on the game-centric strategy that the Oculus’ existing sponsors actually want, the result could be the fastest plunge from hero to unwanted garbage in product history.

    Facebook for your face: I have seen the future, and it’s awesome (sorry, gamers)



    Like you and everyone else on the internet, I was dumbstruck when Facebook’s Zuckerberg announced that his company would be acquiring Oculus VR, the makers of the Oculus Rift virtual reality headset, for $2 billion. The two companies are so utterly different, with paths so magnificently divergent, that it’s hard to see the acquisition as anything more than the random whim of a CEO who was playing a billion-dollar game of Acquisitory Darts. Or perhaps Zuckerberg just enjoyed playing Doom as a child and thought, what’s the point in being one of the world’s richest people if you can’t acquire your childhood idol, John Carmack?

    One wonders what Carmack’s long-term plans are, after being acquired by Facebook
    Be patient

    First, it's important to remember that, in the short term, the Oculus Rift is unlikely to be negatively affected by this acquisition. According to Oculus VR co-founder Palmer Luckey, thanks to Facebook's additional resources, the Oculus Rift will come to market "with fewer compromises even faster than we anticipated." Luckey also says there won't be any weird Facebook tie-ins; if you want to use the Rift as a gaming headset, that option will still be available. Longer-term, of course, the picture is a little murkier. Zuckerberg's post explaining the acquisition makes it clear that he's more interested in the non-gaming applications of virtual reality. "After games, we're going to make Oculus a platform for many other experiences… This is really a new communication platform… Imagine sharing not just moments with your friends online, but entire experiences and adventures."

    Facebook for your face: An Oatmeal comic that successfully predicted Facebook's acquisition some months ago.

    Second Second Life

    Ultimately, I think Facebook's acquisition of Oculus VR is a very speculative bet on the future. Facebook knows that it rules the web right now, but things can change very, very quickly. Facebook showed great savviness when it caught the very rapid consumer shift to smartphones — and now it's trying to work out what the Next Big Thing will be. Instagram, WhatsApp, Oculus VR — these acquisitions all make sense, in that they could be disruptive to Facebook's position as the world's most important communications platform. While you might just see the Oculus Rift as an interesting gaming peripheral, it might not always be so. In general, new technologies are adopted by the military, gaming, and sex industries first — and then eventually, as the tech becomes cheaper and more polished, they percolate down to the mass market. Right now, it's hard to imagine your mom wearing an Oculus Rift — but in five or 10 years, if virtual reality finally comes to fruition, then such a scenario becomes a whole lot more likely.

    Who wouldn't want to walk around Second Life with a VR headset?

    For me, it's easy to imagine a future Facebook where, instead of sitting in front of your PC dumbly clicking through pages and photos with your mouse, you sit back on the sofa, don your Oculus Rift, and walk around your friends' virtual reality homes. As you walk around the virtual space, your Liked movies would be under the TV, your Liked music would be on the hi-fi (which is linked to Spotify), and your Shared/Liked links would be spread out on the virtual coffee table. To look through someone's photos, you might pick up a virtual photo album. I'm sure third parties, such as Zynga and King, would have a ball developing virtual reality versions of FarmVille and Candy Crush Saga. Visiting fan pages would be pretty awesome, too — perhaps Coca-Cola's Facebook page would be full of VR polar bears and happy Santa Clauses, and you'd be able to hang out with the VR versions of your favorite artists and celebrities too, of course.

    And then, of course, there are all the other benefits of advanced virtual reality — use cases that have been bandied around since the first VR setups back in the '80s. Remote learning, virtual reality Skype calls, face-to-face doctor consultations from the comfort of your home — really, the possible applications for an advanced virtual reality system are endless and very exciting.

    But of course, with Facebook's involvement, those applications won't only be endless and exciting — they'll make you fear for the future of society as well. As I've written about extensively in the past, both Facebook and Google are very much in the business of accumulating vast amounts of data, and then monetizing it. Just last week, I wrote about Facebook's facial recognition algorithm reaching human levels of accuracy. For now, Facebook and Google are mostly limited to tracking your behavior on the web — but with the advent of wearable computing, such as Glass and Oculus Rift, your real-world behavior can also be tracked.

    And so we finally reach the crux of the Facebook/Oculus story: the dichotomy of awesome, increasingly powerful wearable tech. On the one hand, it grants us amazingly useful functionality and ubiquitous connectivity that really does change lives. On the other hand, it warmly invites corporate entities into our private lives. I am very, very excited about the future of VR, now that Facebook has signed on — but at the same time, I'm incredibly nervous about how closely linked we are becoming to our corporate overlords.

    Secrets of the PS4: Heavily modified Radeon, supercharged APU design



    For months, there have been rumors that the PS4 wouldn’t just be more powerful than Microsoft’s upcoming Xbox — it would be capable of certain compute workloads that Redmond’s next-generation console wouldn’t be able to touch. In an interview last week, Sony lead hardware architect Mark Cerny shed some major light on what these capabilities look like. We’ve taken his comments and combined them with what we’ve learned from other sources to build a model of how the PS4 is likely organized, and what it can do.
    First, we now know the PS4 is a single system on a chip (SoC) design. According to Cerny, all eight CPU cores, the GPU, and a number of other custom units are all on the same die. Typically, when we talk about SoC design, we distinguish between “on-die” and “on-package.”
      Components are on-package if they’re part of a finished processor but aren’t fabbed in a single unit. The Wii U, for example, has the CPU and GPU on-package, but not on-die. Building the entire PS4 in a monolithic die could cut costs long-term and improve performance, but is riskier in the short-term.

    An overhauled GPU

    According to Cerny, the GPU powering the PS4 is an ATI Radeon with “a large number of modifications.” From the GPU’s perspective, the large RAM pool doesn’t count as innovative. The PS4 has a unified pool of 8GB of RAM, but AMD’s Graphics Core Next GPU architecture (hereafter abbreviated GCN) already ships with 6GB of GDDR5 aboard workstation cards. The biggest change to the graphics processor is Sony’s modification to the command processor, described as follows:
    The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we’ve worked with AMD to increase the limit to 64 sources of compute commands — the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that’s in the system.

    That’s a fairly bold statement. Let’s look at the relevant portion of the HD 7970′s structure:
    AMD GCN front-end
    Here, you can see the Asynchronous Compute Engines and the GPU Command Processor. AMD has always said that it could add more Asynchronous Compute Engine blocks to this structure to facilitate a greater degree of parallelization, but I think Cerny mixed his apples and oranges here, possibly on purpose. First, he refers to specific hardware blocks, then segues into discussing queue depths. AMD released a different slide in its early GCN unveils that may shed some additional light on this topic.


    AMD-ACE



    Each ACE can fetch queue information from the Command Processor and can switch between asynchronous compute tasks depending on what’s coming next. GCN was designed with some support for out-of-order processing, and it sounds as though Sony has expanded the chip’s ability to monitor and schedule how tasks are executed. It’s entirely possible that Sony has added additional ACEs to GCN to support a greater amount of asynchronous computing capability, but simply stuffing the front of the chip with 61 additional ACEs wouldn’t magically make more execution resources available.


    Now we turn our attention to the memory architecture. We know the PS4 uses a 256-bit memory bus and Cerny specifies 176GB/s of bandwidth. That works out to a GDDR5 clock speed of 1375MHz, which is comfortably within the current range of GDDR5 products already on the market. We've put together a set of what we consider to be the top three most likely structures, their strengths, and their weaknesses.
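    As a quick reader-side sanity check of that figure (not something taken from Cerny's interview): a 256-bit bus is 32 bytes wide, and GDDR5 moves data at four times its command clock, so the numbers line up.

    /* Back-of-the-envelope check of the 176 GB/s figure quoted above.
       GDDR5 transfers data at 4x its command clock (quad data rate). */
    #include <stdio.h>

    int main(void) {
        const double bus_bytes     = 256.0 / 8.0;        /* 256-bit bus = 32 bytes */
        const double command_clock = 1.375e9;            /* 1375 MHz, per the text */
        const double data_rate     = 4.0 * command_clock;/* 5.5 GT/s               */

        double bandwidth = bus_bytes * data_rate;        /* bytes per second       */
        printf("Peak bandwidth: %.0f GB/s\n", bandwidth / 1e9);   /* prints 176    */
        return 0;
    }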

    Option 1: A supercharged APU-style design

    AMD has published a great deal of information on Llano and Trinity’s APU design. Llano and Trinity share a common structure that looks like this:
    AMD APU diagram
    In Llano and Trinity, the CPU-GPU communication path varies a great deal depending on which kind of data is being communicated. The solid line (Onion) is a lower-bandwidth bus (2x16B) that allows the GPU to snoop the CPU cache. The dotted lines are the Radeon Memory Bus (Garlic). This is a direct link between the GPU and the Unified North Bridge architecture, which contains the memory controller.




    Why Game of Thrones is the most pirated TV show in the world

    While hard and fast figures are tough to come by, it appears that the Game of Thrones season four premiere will become the most pirated TV episode of all time, racking up around one million downloads within 12 hours of the US airing, and a few million more in the days following. People pirate TV shows, movies, games, and music for a variety of reasons, but it mostly boils down to just two core reasons: a) Money (legally obtaining the files can be untenably expensive), and b) Ease of use (many legally obtained files are locked down with DRM, preventing you from truly owning the files and doing whatever you want with them). With these two factors in mind, let’s take a look at why people pirate Game of Thrones.

    Legally watching Game of Thrones is expensive

    Other than its inherent popularity, the main reason that Game of Thrones is pirated is because it can be very hard and expensive to legally watch in many countries around the world. While many shows eventually end up on Netflix, Hulu, or Amazon Prime, Game of Thrones is distributed by HBO — and the only way to watch an HBO show is with an HBO subscription, or to wait for the eventual DVD/Blu-ray release.



    TorrentFreak analyzed the cost of an HBO subscription in the US, UK, Australia, Canada, and the Netherlands — and its findings are rather grim. In the US, HBO generally runs between $15 and $25 per month (so, ~$5 per episode) — but that’s before the cable/internet subscription (which puts it up to around $100 per month), and not including the fact that many subscriptions have a minimum contract of six or 12 months (so, the real range is around $40 to $120 per episode). In Australia, it’s even worse: The cheapest HBO package will run you $70 per month, with a minimum contract of six months. After some other added costs, it works out at roughly $50 for a single episode of Game of Thrones.


    It’s a similar story in the UK, where it’ll cost you around $35 per month for an HBO subscription, with a minimum contract of 12 months, for a total of $42 per episode (remember, Game of Thrones only runs for 10 weeks). Canada gets HBO fairly cheaply ($18 per month), but you also need a digital or satellite TV subscription on top of that, putting the per-episode price up around US levels. In the Netherlands, the situation is actually not too bad: You can pick up Game of Thrones for around $9 per month, and some providers allow you to cancel your subscription at any time.


    Obviously, if you’re in Australia, very few people in their right mind would pay $50 per episode, or $500 for the season — and so they download the show instead. (Rather unbelievably, the corporate director of Foxtel, the Australian provider of HBO, believes that it’s completely OK to charge $50 per episode.) It’s a similar story in the US, UK, and Canada, where you’re probably paying upwards of $40 per episode. Really, though, it’s the total subscription costs that you need to look at — you can just about justify $50 per episode, but being locked into a $500+ multi-month contract is insane.


    As you can imagine, there are a lot of people in the world who really want to watch Game of Thrones but don’t have $500+ to spare.

    But is it really that simple?

    By this point, you’re probably aware that I’ve oversimplified things to tell a more dramatic story. When you pay ~$30 per month for that HBO package, you usually get a bunch of other channels as well. Yes, being locked into a $60-per-month internet/cable subscription bumps up the price — but you’d need that internet access to pirate the show.
    This is where the second major cause of piracy — ease of use — enters the picture. Yes, you get lots of other TV shows as part of the subscription bundle, but that's just not how we consume media any more. We don't want to buy a huge batch of things and then slowly work our way through it all, including the gristly bits that we don't like — we live in an age where we choose exactly what we want to consume, and when we want to consume it. If HBO made individual episodes of Game of Thrones available to purchase worldwide for $5 immediately after it airs in the US, then piracy would drop dramatically. If those files also lacked DRM, allowing you to move them between your smartphone and home theater setup freely, piracy would probably become a non-issue overnight.
    The thing is, HBO knows this. Broadcasters around the world know this. The various rights holders (actors, writers, authors) know this. But still, HBO does nothing about it. Why? Because, as flawed as the system appears to be, it still works. HBO is still making tons of money ($5 billion in 2013), as are the various worldwide broadcasters and rights holders. Game of Thrones is massively popular, driving huge levels of piracy — but also large amounts of DVD and merchandise sales. Yes, broadcast TV could probably be done in a better way (see: Netflix and House of Cards), but the current status quo is obviously still working in HBO’s favor — so why rock the boat?















    Google invents smart contact lens with built-in camera: Superhuman Terminator-like vision here we come



    Google has invented a new smart contact lens with an integrated camera. The camera would be very small and sit near the edge of the contact lens so that it doesn’t obscure your vision. By virtue of being part of the contact lens, the camera would naturally follow your gaze, allowing for a huge range of awesome applications, from the basis of a bionic eye system for blind and visually impaired people, through to early warning systems (the camera spots a hazard before your brain does), facial recognition, and superhuman powers (telescopic and infrared/night vision). In related news, Google Glass is publicly available today in the US for one day only (still priced at $1500).

    This new smart contact lens would have a tiny CMOS camera sensor just below your pupil, a control circuit, and some method of receiving power wirelessly (more on that later). Because an imaging sensor, by definition, has to absorb light, it wouldn't be transparent — but it could probably be color-matched to your iris, so that your eyes don't look too freaky.
      As you can probably imagine, there are some rather amazing applications if you have two cameras embedded in your contact lenses. You can't do much in the way of image processing on the contact lens itself, but you could stream it to a nearby smartphone or head-mounted display (i.e. Google Glass), where a more powerful computer could perform all sorts of real-time magic. Google suggests that the cameras might warn you if there's oncoming traffic at a crosswalk — useful for a normal-sighted person, but utterly invaluable for a blind or partially sighted person. For me, the more exciting possibilities include facial recognition (a la Terminator), and abilities that verge on the superhuman or transhuman, such as digital zoom and infrared thermal night vision.


    Beyond the medical- and consumer-oriented applications, you can also imagine the possibilities if police were equipped with contact lenses that could spot criminal faces in a crowd, or a bulge under a jacket that could be a concealed weapon. Oh, and the most exciting/deadly application of them all: Soldiers with smart contact lenses that alert them to incoming fire, provide infrared vision that can see through smoke, real-time range finding for more accurate sniping…

    This invention, from the Google X skunkworks lab, comes in the form of a patent that was filed in 2012 and was recently published by the US PTO. Earlier this year, Google announced that it was working on a smart contact lens for diabetics that provides a real-time glucose level reading from your tears. As far as we can tell, there’s no timeline for real-world trials of either variety of contact lens — but we can tell you that the technology to create such devices is very nearly here. Way back in 2011, a smart contact lens with an LED display was trialed in the lab. 

    Moving forward, there are some concerns about power delivery (there’s no space for a battery, of course, so it has to be beamed in wirelessly), and whether it’s wise to have a wireless device implanted in a rather sensitive organ, but I don’t think these will be game-breaking problems. For now, we’re talking about fairly chunky contact lenses that are best suited to laboratory testing — but it shouldn’t be more than a few years until real, comfortable smart contact lenses come to market.

    Monday, 14 April 2014

    Optical Camouflage

                                   Optical Camouflage (2002)

     Tachi Creates a See-Through Coat

    Thanks to Japanese research, the twenty-first-century soldier may soon be blending invisibly into the background. The man behind optical camouflage, Susumu Tachi, is a professor at Tokyo University, where he works on the "science and technology of artificial reality". Ironically, given that he now works to make things invisible, Tachi previously developed a robotic guide dog for the blind.
    The optical camouflage developed by Tachi and his research team works by filming the background environment and projecting it onto a coat worn by the test subject. However, this is no average coat: it is covered in thousands of tiny beads that reflect light back to its source, rendering the coat effectively invisible. That is the theory; in reality the system is still far from perfect and in great need of a reduction in the volume of equipment required.

    Segway PT

                                 Segway PT

    The Segway PT is a two-wheeled, self-balancing, battery-powered electric vehicle invented by Dean Kamen. It is produced by Segway Inc. of New Hampshire, USA. The name Segway is a homophone of the word segue, meaning smooth transition. PT is an abbreviation for personal transporter.
    Computers and motors in the base of the device keep the Segway PT upright when powered on with balancing enabled. A user commands the Segway to go forward by shifting their weight forward on the platform, and backward by shifting their weight backward. The Segway detects, as it balances, the change in its center of mass, and first establishes and then maintains a corresponding speed, forward or backward. Gyroscopic sensors and fluid-based leveling sensors detect the weight shift. To turn, the user presses the handlebar to the left or the right.
    Segway PTs are driven by electric motors and can reach a speed of 12.5 miles per hour (20.1 km/h).

     Technology

     The dynamics of the Segway PT are similar to a classic control problem, the inverted pendulum. The Segway PT (PT is an initialism for personal transporter, while the old suffix HT was an initialism for human transporter) has electric motors powered by Valence Technology phosphate-based lithium-ion batteries, which can be charged from household current. It balances with the help of dual computers that run proprietary software, two tilt sensors, and five gyroscopic sensors developed by BAE Systems' Advanced Technology Centre. The servo drive motors rotate the wheels forwards or backwards as needed for balance or propulsion. The rider controls forward and backward movement by leaning the Segway relative to the combined center of mass of the rider and Segway, by holding the control bar closer to or farther from their body. The Segway detects the change in the balance point, and adjusts the speed at which it is balancing the rider accordingly. On older models, steering is controlled by a twist grip on the left handlebar, which simply varies the speeds between the two motors, rotating the Segway PT (a decrease in the speed of the left wheel would turn the Segway PT to the left). Newer models enable the use of tilting the handlebar to steer.
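    To make the inverted-pendulum comparison concrete, here is a deliberately simplified balance loop acting on a simulated pendulum. It is only a conceptual sketch with made-up gains; Segway's actual control software is proprietary and far more sophisticated.

    /* Conceptual sketch: a PD controller balancing a simulated inverted
       pendulum, the classic control problem the paragraph above mentions.
       All constants are illustrative; this is not Segway's software.      */
    #include <stdio.h>

    int main(void) {
        const double g = 9.81, length = 1.0;   /* pendulum length, metres      */
        const double dt = 0.01;                /* control loop period, seconds */
        const double KP = 40.0, KD = 10.0;     /* illustrative PD gains        */

        double angle = 0.10;                   /* initial forward lean, rad    */
        double rate  = 0.0;                    /* angular velocity, rad/s      */

        for (int step = 0; step < 300; step++) {
            if (step % 50 == 0)
                printf("t=%.2fs  lean=%+.4f rad\n", step * dt, angle);

            /* Lean forward -> accelerate the wheels forward, and vice versa. */
            double accel = KP * angle + KD * rate;   /* base acceleration     */

            /* Linearised inverted-pendulum dynamics: gravity tips it further,
               accelerating the base pushes it back upright.                  */
            double angular_accel = (g / length) * angle - accel / length;
            rate  += angular_accel * dt;
            angle += rate * dt;
        }
        return 0;
    }

    Run for a few seconds of simulated time, the printed lean angle decays toward zero, which matches the behaviour described above: the wheels accelerate in the direction of the lean until the platform is back under the combined centre of mass.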

    Stealth Technology

                                          Stealth Technology


    Stealth technology, also termed LO technology (low observable technology), is a sub-discipline of military tactics and passive electronic countermeasures, which cover a range of techniques used with personnel, aircraft, ships, submarines, missiles and satellites to make them less visible (ideally invisible) to radar, infrared, sonar and other detection methods. It corresponds to camouflage for these parts of the electromagnetic spectrum.
    Development in the United States began in 1958, after earlier attempts to prevent Soviet radar tracking of its U-2 spy planes during the Cold War had been unsuccessful. Designers turned to developing particular shapes for planes that tended to reduce detection, by redirecting electromagnetic waves away from radars. Radar-absorbent material was also tested and made to reduce or block radar signals that reflect off the surface of planes. Such changes to shape and surface composition form stealth technology as currently used on the Northrop Grumman B-2 Spirit "Stealth Bomber". The concept of stealth is to operate or hide without giving enemy forces any indication of the presence of friendly forces. This concept was first explored through camouflage, by blending into the background visual clutter. As the potency of detection and interception technologies (radar, IRST, surface-to-air missiles, etc.) has increased over time, so too has the extent to which the design and operation of military personnel and vehicles have been affected in response. Some military uniforms are treated with chemicals to reduce their infrared signature. A modern "stealth" vehicle is designed from the outset to have a chosen spectral signature. The degree of stealth embodied in a particular design is chosen according to the predicted threat capabilities.

    Internet Protocol

                                            Internet Protocol

     

    The Internet Protocol (IP) is the principal communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet.
    IP, as the primary protocol in the Internet layer of the Internet protocol suite, has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information.
    Historically, IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974; the other being the connection-oriented Transmission Control Protocol (TCP). The Internet protocol suite is therefore often referred to as TCP/IP.
    The first major version of IP, Internet Protocol Version 4 (IPv4), is the dominant protocol of the Internet. Its successor is Internet Protocol Version 6 (IPv6).
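    As a concrete illustration of the header fields described above (a reader-written sketch, not text from any RFC), the following fragment decodes a few of those fields from a raw 20-byte IPv4 header. The sample bytes and addresses are invented for the example.

    /* Decode a handful of IPv4 header fields from a raw 20-byte buffer. */
    #include <stdint.h>
    #include <stdio.h>

    struct ipv4_fields {
        unsigned version, ihl, ttl, protocol, total_length;
        uint8_t  src[4], dst[4];        /* source and destination addresses */
    };

    static void parse_ipv4(const uint8_t h[20], struct ipv4_fields *f) {
        f->version      = h[0] >> 4;            /* high nibble of byte 0          */
        f->ihl          = h[0] & 0x0F;          /* header length in 32-bit words  */
        f->total_length = (unsigned)(h[2] << 8 | h[3]);
        f->ttl          = h[8];
        f->protocol     = h[9];                 /* e.g. 6 = TCP, 17 = UDP         */
        for (int i = 0; i < 4; i++) {
            f->src[i] = h[12 + i];
            f->dst[i] = h[16 + i];
        }
    }

    int main(void) {
        /* invented header: 192.0.2.1 -> 198.51.100.7, TTL 64, UDP, 60 bytes */
        const uint8_t hdr[20] = {
            0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46, 0x40, 0x00, 0x40, 0x11,
            0x00, 0x00, 192, 0, 2, 1, 198, 51, 100, 7
        };
        struct ipv4_fields f;
        parse_ipv4(hdr, &f);
        printf("IPv%u, length %u bytes, TTL %u, protocol %u\n",
               f.version, f.total_length, f.ttl, f.protocol);
        printf("src %d.%d.%d.%d -> dst %d.%d.%d.%d\n",
               f.src[0], f.src[1], f.src[2], f.src[3],
               f.dst[0], f.dst[1], f.dst[2], f.dst[3]);
        return 0;
    }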

    Laser Cooling Of Atoms

                                Laser Cooling Of Atoms

    Laser cooling refers to a number of techniques in which atomic and molecular samples are cooled down to near absolute zero through the interaction with one or more laser fields.

    The first example of laser cooling, and also still the most common method (so much so that it is still often referred to simply as 'laser cooling') is Doppler cooling. Other methods of laser cooling include:
    • Sisyphus cooling
    • Resolved sideband cooling
    • Velocity selective coherent population trapping (VSCPT)
    • Anti-Stokes inelastic light scattering (typically in the form of fluorescence or Raman scattering)
    • Cavity mediated cooling
    • Sympathetic cooling
    • Use of a Zeeman slower

    How it works

    A laser photon hits the atom and causes it to emit photons of a higher average energy than the one it absorbed from the laser. The energy difference comes from thermal excitations within the atoms, and this heat from the thermal excitation is converted into light which then leaves the atom as a photon. This can also be seen from the perspective of the law of conservation of momentum. When an atom is traveling towards a laser beam and a photon from the laser is absorbed by the atom, the momentum of the atom is reduced by the amount of momentum of the photon it absorbed.
    Δp/p = p_photon/(m·v) = Δv/v
    Δv = p_photon/m
    Momentum of the photon: p = E/c = h/λ

    Suppose you are floating on a hovercraft, moving with a significant velocity in one direction (due north, for example). Heavy metallic balls are being thrown at you from all four directions (front, back, left, and right), but you can only catch the balls that are coming from directly in front of you. If you were to catch one of these balls, you would slow down due to the conservation of momentum. Eventually, however, you must throw the ball away, but the direction in which you throw the ball away is completely random. Due to conservation of momentum, throwing the ball away will increase your velocity in the direction opposite the ball's. However, since the "throw-away" direction is random, this contribution to your velocity will vanish on average. Therefore your forward velocity will decrease (due to preferentially catching the balls in front) and eventually your movements will entirely be dictated by the recoil effect of catching and throwing the balls.

    η_cooling = P_cooling/P_electric
    η_cooling = cooling efficiency
    P_cooling = cooling power in the active material
    P_electric = input electric power to the pump light source

    h/λ = p = m·v
    h = Planck's constant (h = 6.626 × 10⁻³⁴ J·s)
    λ = de Broglie wavelength
    p = momentum of the atom
    m = mass of the atom
    v = velocity of the atom

    Example: λ = h/(m·v) = λ_photon/x
    x = number of photons needed to stop the momentum of an atom with mass m and velocity v
    Na atom:
    m_Na = 3.818 × 10⁻²⁶ kg/atom
    v_Na ≈ 300 m/s
    λ_photon = 600 nm
    λ_photon/x = h/(m_Na · v_Na) ⟹ x ≈ 10372
    Conclusion: a total of about 10372 photons is needed to stop the momentum of one sodium atom moving at roughly 300 m/s. Laser-cooling experiments typically scatter on the order of 10⁷ photons per second, so such a sodium atom can be brought to rest in about 1 millisecond.
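    The same estimate can be checked numerically; this snippet simply reproduces the arithmetic above.

    /* Verifying the photon-count estimate: how many 600 nm photon kicks are
       needed to cancel the momentum of a sodium atom moving at ~300 m/s.  */
    #include <stdio.h>

    int main(void) {
        const double h       = 6.626e-34;   /* Planck's constant, J*s       */
        const double m_na    = 3.818e-26;   /* mass of one Na atom, kg      */
        const double v       = 300.0;       /* atom velocity, m/s           */
        const double lambda  = 600e-9;      /* photon wavelength, m         */
        const double scatter = 1e7;         /* photons scattered per second */

        double p_atom   = m_na * v;          /* atom momentum               */
        double p_photon = h / lambda;        /* photon momentum, p = h/λ    */
        double x        = p_atom / p_photon; /* photons needed to stop it   */

        printf("photons needed: %.0f\n", x);                   /* ~10372    */
        printf("stopping time:  %.1f ms\n", 1e3 * x / scatter); /* ~1.0 ms  */
        return 0;
    }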


    Supercomputer

                            Supercomputer

         













    A supercomputer is a computer at the frontline of contemporary processing capacity, particularly speed of calculation, with individual operations completed in nanoseconds.
    Supercomputers were introduced in the 1960s, made initially and, for decades, primarily by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm. As of November 2013, China's Tianhe-2 supercomputer is the fastest in the world at 33.86 petaFLOPS, or 33.86 quadrillion floating point operations per second.
    Systems with massive numbers of processors generally take one of two paths: In one approach (e.g., in distributed computing), a large number of discrete computers (e.g., laptops) distributed across a network (e.g., the Internet) devote some or all of their time to solving a common problem; each individual computer (client) receives and completes many small tasks, reporting the results to a central server which integrates the task results from all the clients into the overall solution. In another approach, a large number of dedicated processors are placed in close proximity to each other (e.g. in a computer cluster); this saves considerable time moving data around and makes it possible for the processors to work together (rather than on separate tasks), for example in mesh and hypercube architectures.
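    As a toy illustration of the second approach, the sketch below splits one sum across several POSIX threads and then combines the partial results — the same divide-and-combine pattern a cluster applies across thousands of processors. It is a generic example, not code from any particular supercomputer.

    /* Split one problem across several "processors" (threads), then gather
       the partial results. Compile with: cc sum.c -lpthread               */
    #include <pthread.h>
    #include <stdio.h>

    #define N        1000000
    #define WORKERS  4

    static double data[N];

    struct slice { int start, end; double partial; };

    static void *sum_slice(void *arg) {
        struct slice *s = arg;
        s->partial = 0.0;
        for (int i = s->start; i < s->end; i++)
            s->partial += data[i];
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++) data[i] = 1.0;   /* known answer: N */

        pthread_t    threads[WORKERS];
        struct slice slices[WORKERS];
        int chunk = N / WORKERS;

        for (int w = 0; w < WORKERS; w++) {
            slices[w].start = w * chunk;
            slices[w].end   = (w == WORKERS - 1) ? N : (w + 1) * chunk;
            pthread_create(&threads[w], NULL, sum_slice, &slices[w]);
        }

        double total = 0.0;
        for (int w = 0; w < WORKERS; w++) {          /* gather partial results */
            pthread_join(threads[w], NULL);
            total += slices[w].partial;
        }
        printf("total = %.0f\n", total);             /* prints 1000000 */
        return 0;
    }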
    The use of multi-core processors combined with centralization is an emerging trend; one can think of this as a small cluster (the multicore processor in a smartphone, tablet, laptop, etc.) that both depends upon and contributes to the cloud.
    Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.

     

    Digital Audio Tape

                        Digital Audio Tape


    Digital Audio Tape (DAT or R-DAT) is a signal recording and playback medium developed by Sony and introduced in 1987. In appearance it is similar to a Compact Cassette, using 4 mm magnetic tape enclosed in a protective shell, but is roughly half the size at 73 mm × 54 mm × 10.5 mm. As the name suggests, the recording is digital rather than analog. DAT has the ability to record at higher, equal or lower sampling rates than a CD (48, 44.1 or 32 kHz sampling rate respectively) at 16-bit quantization. If a digital source is copied then the DAT will produce an exact clone, unlike other digital media such as Digital Compact Cassette or non-Hi-MD MiniDisc, both of which use a lossy data reduction system.
    Like most formats of videocassette, a DAT cassette may only be recorded and played in one direction, unlike an analog compact audio cassette.
    Although intended as a replacement for audio cassettes, the format was never widely adopted by consumers because of issues of expense and concerns from the music industry about unauthorized digital quality copies. The format saw moderate success in professional markets and as a computer storage medium. As Sony has ceased production of new recorders, it will become more difficult to play archived recordings in this format unless they are copied to other formats or hard drives.

    Uses of DAT

    Professional recording industry

    DAT was used professionally in the 1990s by the professional audio recording industry as part of an emerging all-digital production chain also including digital multi-track recorders and digital mixing consoles that was used to create a fully digital recording. In this configuration, it is possible for the audio to remain digital from the first AD converter after the mic preamp until it is in a CD player.

    Pre-recorded DAT

    In May 1988, Wire's album The Ideal Copy became the first popular music recording to be commercially released on DAT format. Several other albums from multiple record labels were also released as pre-recorded DAT tapes in the first few years of the format's existence, in small quantities as well.

    Amateur and home use

    DAT was envisaged by proponents as the successor format to analogue audio cassettes in the way that the compact disc was the successor to vinyl-based recordings. It sold well in Japan, where high-end consumer audio stores stocked DAT recorders and tapes into the 2010s and second-hand stores generally continued to offer a wide selection of mint condition machines. However, there and in other nations, the technology was never as commercially popular as CD or cassette. DAT recorders proved to be comparatively expensive and few commercial recordings were available. Globally, DAT remained popular, for a time, for making and trading recordings of live music (see bootleg recording), since available DAT recorders predated affordable CD recorders.

    Computer data storage medium

    The format was designed for audio use, but through the ISO Digital Data Storage standard was adopted for general data storage, storing from 1.3 to 80 GB on a 60 to 180 meter tape depending on the standard and compression. It is a sequential-access medium and is commonly used for backups. Due to the higher requirements for capacity and integrity in data backups, a computer-grade DAT was introduced, called DDS (Digital Data Storage). Although functionally similar to audio DATs, only a few DDS and DAT drives (in particular, those manufactured by Archive for SGI workstations) are capable of reading the audio data from a DAT cassette. SGI DDS4 drives no longer have audio support; SGI removed the feature due to "lack of demand".

    Electronic Paper

                                        Electronic Paper



    Electronic paper, e-paper and electronic ink are display technologies which are designed to mimic the appearance of ordinary ink on paper. Unlike conventional backlit flat panel displays which emit light, electronic paper displays reflect light like ordinary paper, theoretically making them more comfortable to read, and giving the surface a wider viewing angle compared to conventional displays. The contrast ratio in available displays as of 2008 might be described as similar to that of newspaper, though newly developed displays are slightly better. An ideal e-paper display can be read in direct sunlight without the image appearing to fade.
    Many electronic paper technologies can hold static text and images indefinitely without using electricity. Flexible electronic paper uses plastic substrates and plastic electronics for the display backplane. There is ongoing competition among manufacturers to provide full-color ability.
    Applications of electronic visual displays include electronic pricing labels in retail shops, digital signage, timetables at bus stations, electronic billboards, mobile phone displays, and e-readers able to display digital versions of books and e-paper magazines.

    Disadvantages

    Electronic paper technologies have a very low refresh rate compared to other low-power display technologies, such as LCD. This prevents producers from implementing sophisticated interactive applications (using fast moving menus, mouse pointers or scrolling) like those which are possible on mobile devices. An example of this limit is that a document cannot be smoothly zoomed without either extreme blurring during the transition or a very slow zoom.
    An e-ink screen showing the "ghost" of a prior image
    Another limit is that a shadow of an image may be visible after refreshing parts of the screen. Such shadows are termed "ghost images", and the effect is termed "ghosting". This effect is reminiscent of screen burn-in but, unlike it, is solved after the screen is refreshed several times. Turning every pixel white, then black, then white, helps normalize the contrast of the pixels. This is why several devices with this technology "flash" the entire screen white and black when loading a new image.

    No company has yet successfully brought a full-color display to market.
    Electronic paper is still a topic in the R&D community and remains under development for manufacturability, marketability, and reliability considerations.

    C programming language

                                   C programming language

     

    In computing, C (/ˈsiː/, as in the letter C) is a general-purpose programming language initially developed by Dennis Ritchie between 1969 and 1973 at AT&T Bell Labs. Like most imperative languages in the ALGOL tradition, C has facilities for structured programming and allows lexical variable scope and recursion, while a static type system prevents many unintended operations. Its design provides constructs that map efficiently to typical machine instructions, and therefore it has found lasting use in applications that had formerly been coded in assembly language, most notably system software like the Unix computer operating system.
    C is one of the most widely used programming languages of all time, and C compilers are available for the majority of available computer architectures and operating systems.
    Many later languages have borrowed directly or indirectly from C, including D, Go, Rust, Java, JavaScript, Limbo, LPC, C#, Objective-C, Perl, PHP, Python, Verilog (hardware description language), and Unix's C shell. These languages have drawn many of their control structures and other basic features from C. Most of them (with Python being the most dramatic exception) are also very syntactically similar to C in general, and they tend to combine the recognizable expression and statement syntax of C with underlying type systems, data models, and semantics that can be radically different. C++ and Objective-C started as compilers that generated C code; C++ is currently nearly a superset of C, while Objective-C is a strict superset of C.
    Before there was an official standard for C, many users and implementors relied on an informal specification contained in a book by Dennis Ritchie and Brian Kernighan; that version is generally referred to as "K&R" C. In 1989 the American National Standards Institute published a standard for C (generally called "ANSI C" or "C89"). The next year, the same specification was approved by the International Organization for Standardization as an international standard (generally called "C90"). ISO later released an extension to the internationalization support of the standard in 1995, and a revised standard (known as "C99") in 1999. The current version of the standard (now known as "C11") was approved in December 2011.
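    To ground the description above, here is a small standard C program (an illustration written for this post, not an excerpt from K&R or the ISO standard) that exercises the features mentioned earlier: static typing, lexical scope, and recursion.

    #include <stdio.h>

    /* Recursion plus a static type system: the compiler checks at compile
       time that factorial takes an unsigned int and returns an unsigned long. */
    static unsigned long factorial(unsigned int n) {
        return (n <= 1) ? 1UL : n * factorial(n - 1);
    }

    int main(void) {
        for (unsigned int i = 1; i <= 10; i++) {
            /* i is lexically scoped to this loop (C99 and later) */
            printf("%2u! = %lu\n", i, factorial(i));
        }
        return 0;
    }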

    Computerized Telephone Exchange

                        Computerized Telephone Exchange (1971)

    Sunday, 13 April 2014

    Food processor

                                            Food processor

     

    A food processor is a kitchen appliance used to facilitate repetitive tasks in the preparation of food. Today, the term almost always refers to an electric-motor-driven appliance, although there are some manual devices also referred to as "food processors".
    Food processors are similar to blenders in many ways. The primary difference is that food processors use interchangeable blades and disks (attachments) instead of a fixed blade. Also, their bowls are wider and shorter, a more appropriate shape for the solid or semi-solid foods usually worked in a food processor. Usually, little or no liquid is required in the operation of the food processor, unlike a blender, which requires some amount of liquid to move the particles around the blade.



    Functions

    Food processors normally have multiple functions, depending on the placement and type of attachment or blade. These functions normally consist of:
    • Slicing/chopping vegetables
    • Grinding items such as nuts, seeds (e.g. spices), meat, or dried fruit
    • Shredding or grating cheese or vegetables
    • Pureeing
    • Mixing and kneading doughs







    X-ray computed tomography

                                     X-ray computed tomography



    X-ray computed tomography (x-ray CT) is a technology that uses computer-processed x-rays to produce tomographic images (virtual 'slices') of specific areas of the scanned object, allowing the user to see what is inside it without cutting it open. Digital geometry processing is used to generate a three-dimensional image of the inside of an object from a large series of two-dimensional radiographic images taken around a single axis of rotation. Medical imaging is the most common application of x-ray CT. Its cross-sectional images are used for diagnostic and therapeutic purposes in various medical disciplines. The rest of this article discusses medical-imaging x-ray CT; industrial applications of x-ray CT are discussed at industrial computed tomography scanning.
    As x-ray CT is the most common form of CT in medicine and various other contexts, the term computed tomography alone (or CT) is often used to refer to x-ray CT, although other types exist (such as positron emission tomography [PET] and single-photon emission computed tomography [SPECT]). Older and less preferred terms that also refer to x-ray CT are computed axial tomography (CAT scan) and computer-assisted tomography. X-ray CT is a form of radiography, although the word "radiography" used alone usually refers, by wide convention, to non-tomographic radiography.

    CT produces a volume of data that can be manipulated in order to demonstrate various bodily structures based on their ability to block the x-ray beam. Although, historically, the images generated were in the axial or transverse plane, perpendicular to the long axis of the body, modern scanners allow this volume of data to be reformatted in various planes or even as volumetric (3D) representations of structures. Although most common in medicine, CT is also used in other fields, such as nondestructive materials testing. Another example is archaeological uses such as imaging the contents of sarcophagi. Individuals responsible for performing CT exams are called radiologic technologists or radiographers and are required to be licensed in most states of the USA.
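    The reconstruction idea can be sketched, very roughly, as "smearing" each one-dimensional projection back across the slice and summing the contributions. The fragment below is a deliberately simplified, unfiltered backprojection with an invented toy sinogram; real scanners filter each projection and use far more careful geometry, or iterative methods.

    /* Simplified, unfiltered backprojection of one 2-D slice from a set of
       1-D projections (a "sinogram"). Illustration only; compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    #define PI      3.14159265358979323846
    #define N       64     /* reconstructed slice: N x N pixels      */
    #define ANGLES  180    /* one projection per degree around axis  */
    #define DETS    96     /* detector bins in each 1-D projection   */

    void backproject(double sino[ANGLES][DETS], double img[N][N]) {
        for (int a = 0; a < ANGLES; a++) {
            double theta = a * PI / ANGLES;
            double c = cos(theta), s = sin(theta);
            for (int y = 0; y < N; y++) {
                for (int x = 0; x < N; x++) {
                    /* project the pixel centre onto this view's detector */
                    double xc = x - N / 2.0, yc = y - N / 2.0;
                    int det = (int)lround(xc * c + yc * s) + DETS / 2;
                    if (det >= 0 && det < DETS)
                        img[y][x] += sino[a][det] / ANGLES;   /* smear back */
                }
            }
        }
    }

    int main(void) {
        static double sino[ANGLES][DETS];
        static double img[N][N];

        /* toy object: a single bright point at the centre of rotation, which
           appears in the middle detector bin of every projection            */
        for (int a = 0; a < ANGLES; a++) sino[a][DETS / 2] = 1.0;

        backproject(sino, img);
        printf("centre pixel %.3f, corner pixel %.3f\n",
               img[N / 2][N / 2], img[0][0]);   /* centre is much brighter */
        return 0;
    }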
    Usage of CT has increased dramatically over the last two decades in many countries. An estimated 72 million scans were performed in the United States in 2007. One study estimated that as many as 0.4% of current cancers in the United States are due to CTs performed in the past and that this may increase to as high as 1.5 to 2% with 2007 rates of CT usage; however, this estimate is disputed, as there is no scientific consensus about the existence of damage from low levels of radiation. Kidney problems following intravenous contrast agents may also be a concern in some types of studies.

    Genetically Modified Organisms

                                Genetically Modified Organisms

     

    A genetically modified organism (GMO) is an organism whose genetic material has been altered using genetic engineering techniques. Organisms that have been genetically modified include micro-organisms such as bacteria and yeast, insects, plants, fish, and mammals. GMOs are the source of genetically modified foods, and are also widely used in scientific research and to produce goods other than food. The term GMO is very close to the technical legal term, 'living modified organism' defined in the Cartagena Protocol on Biosafety, which regulates international trade in living GMOs (specifically, "any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology").
    This article focuses on what organisms have been genetically engineered, and for what purposes. The article on genetic engineering focuses on the history and methods of genetic engineering, and on applications of genetic engineering and of GMOs. Both articles cover much of the same ground but with different organizations (sorted by organism in this article; sorted by application in the other). There are separate articles on genetically modified crops, genetically modified food, regulation of the release of genetic modified organisms, and controversies.


    Production

    Genetic modification involves the mutation, insertion, or deletion of genes. When genes are inserted, they usually come from a different species, which is a form of horizontal gene transfer. In nature this can occur when exogenous DNA penetrates the cell membrane for any reason. To do this artificially may require attaching the genes to a virus or just physically inserting the extra DNA into the nucleus of the intended host with a very small syringe, with the use of electroporation (that is, introducing DNA from one organism into the cell of another by use of an electric pulse) or with very small particles fired from a gene gun. However, other methods exploit natural forms of gene transfer, such as the ability of Agrobacterium to transfer genetic material to plants, or the ability of lentiviruses to transfer genes to animal cells.

     

    © 2013. All rights reserved.
