Windows 10 will be made an automatic “recommended” update early next year

The Windows 10 free upgrade program has so far concentrated on those Windows 7 and 8 users who reserved their copy in the weeks leading up to the operating system’s release. Over the coming months, Microsoft will start to spread the operating system to a wider audience. The Windows 10 upgrade will soon be posted as an “Optional Update” in Windows Update, advertising it to anyone who examines that list of updates.

Then, early next year, it will be categorized as a “Recommended Update.” This is significant, because it means that systems that are configured to download and install recommended updates—which for most people is the safest option—will automatically fetch the upgrade and start its installer. The installer will still require human intervention to actually complete—you won’t wake up to find your PC with a different operating system—but Windows users will no longer need to actively seek the upgrade.

This mirrors an accidental change Microsoft made earlier this month, when the Windows 10 upgrade showed up for some people as a recommended update and its installer started automatically.

That surprise change wasn’t very popular, so why is Microsoft going to do the same thing again? Terry Myerson, executive vice president of the Windows and Devices group, told us that Microsoft has fielded a huge number of support requests from people running Windows 7 and Windows 8 who want the upgrade but for one reason or another did not opt in to the reservation system. Pushing the upgrade out through Windows Update for everyone will make it a lot more accessible. The upgrade notifications will also be made clearer and more compelling. Myerson’s belief is that by communicating this plan before the change is made, the unhappiness that the accidental change provoked can be avoided. Anyone who doesn’t want the upgrade will have plenty of time to disable automatic updates between now and the new year.

Microsoft is also going to release an improved version of its Media Creation Tool that’s used for creating bootable DVDs and USB keys to install Windows 10. The tool will soon support the creation of universal install media, capable of installing both the 32- and 64-bit versions of the operating system, in both its Home and Pro versions. This will allow people with multiple systems to use a single USB stick to upgrade all of them, regardless of their configuration.

Finally, Redmond will soon start what it calls an “experiment” to bring users of pirated Windows (or “non-Genuine” Windows, in Microsoft’s terminology) into the fold. Myerson writes that Microsoft has seen many attempts by non-Genuine users to upgrade to Windows 10, and claims that in many cases the users have resorted to buying Windows 10 online. To embrace these users, Microsoft will add easy access to the Windows Store to provide a streamlined route to buy Windows 10. If this experiment is successful, it will be expanded to other markets.


Fight the future: Ars readers say “NO” to the Internet of Things

It seems Ars readers are not ready to welcome our new IoT overlords.

Today, Cyrus Farivar and I hosted a live chat about the future of the Internet of Things (IoT). We didn’t get to all the questions that were posted within the time that we had, and we barely scraped the surface on the topic’s many angles. But the response of Ars readers seemed almost universal—“we do not want Internet in our stuff, thanks.”

“My refrigerator is supposed to keep things cold, [and] it does not need an Internet connection to do so,” said Ars reader ProfessorGuy.  Reader Gmerrick concurred. “Quite frankly IoT has zero place in my life. I have no requirement to have a connected coffee pot, or know when I am running out of eggs, or low on milk. I think I have the brain power and the MK1 eyeball to manually do this stuff. the incessant need to put a computer of some sort into stuff is solutions hunting for problems to solve.”

Of course, the major feature of IoT that some readers keyed in on was how it was a tool for companies to create vendor lock-in. “Even if we charitably focus on the stuff that might actually be handy (if mostly not worth the price at present) like ‘smart’ lighting and such,” wrote fuzzyfuzzyfungus, “the state of the market is an absolutely ghastly morass of various players jockying to build walled gardens that ‘interoperate’ as long as you never leave them, devices that are ‘smart’ only in the sense that they include a lousy mobile app; but are wholly unsuitable for any sort of useful integration; and devices that are built on the assumption (generally not modifiable without deep firmware modification) that the vendor will, forever, be the aggregation and command-and-control center of the operation, and you’ll just have an account with them.”

Creeping in quietly

However, IoT may be creeping into people’s lives in ways they’re not aware of. Some commenters on Twitter pointed out the IoT benefits for healthcare, particularly when it comes to monitoring the elderly—a field where IoT has already taken hold in some forms. Others saw the benefit of manufacturers having the ability to monitor and patch some devices, particularly major appliances, as part of their long-term support.

We talked about how artists and people in the maker community have already done innovative things with adapting toys and other devices with Arduino, Raspberry Pi, and other simple computers (including the Bearduino project we wrote about in 2013). Now companies such as Mattel are working to embed artificial intelligence into toys like Hello Barbie, and soon other devices we interact with could draw on cloud computing to make them more intelligent (like Siri, Cortana, and “OK Google” do today). And aspects from devices like the Kinect controller for the Xbox could soon find their way into more household products, connecting them to the Internet for features that might not have an obvious “Internet” role.

For these devices to be accepted long-term, readers said they had to have some sort of long-term support. “I don’t expect a manufacturer to support consumer devices forever,” wrote Orange Crush, “but I do think they need to be designed to last beyond the support period. My device shouldn’t just stop working or major pieces of functionality get disabled the minute the manufacturer decides they’re obsolete.”

In order to provide that sort of capability, IoT technology needs to be reliable enough that a patch won’t blow it up. It must also receive patch downloads in a timely fashion. As we mentioned in our feature, a big piece of that reliability may have to come from distributed delivery networks and intelligence closer to each device in order to cut down the latency of connections between devices and the software in the cloud meant to assist.

One reader asked about the potential for adding IoT support to existing appliances, but that doesn’t appear to be something that’s in the immediate future. Based on conversations with experts in the field, it’s more likely that IoT will slip into new appliances slowly as companies become confident that it will work. That may not be until after cellular carriers show that they can support connections more securely—and after manufacturers are certain that the costs are worth it.

While there have been trials of IoT technology to collect telemetry from more expensive appliances and other systems, the cost of cellular connections and the questionable security and reliability of other network connectivity have kept manufacturers thus far from mass adoption (let alone retrofitting). For now, the promise touted by IBM years ago of your refrigerator calling the repairman before it fails is probably still out of reach for the foreseeable future.

The Internet of Teddy Ruxpins: a vision of the future?

Given the scrutiny drones are getting from government and the recent legislative attention that car hacking brought the auto industry, it would seem likely that the Federal Trade Commission and other government bodies will be keeping a close eye on how IoT technology gets applied to consumer devices. The Food and Drug Administration’s oversight of medical devices has certainly slowed the adoption of new technology, but it hasn’t stopped it. And federal attention to critical infrastructure security may get government tied up in IoT in industrial settings if the industry doesn’t do a good job of managing the issues of privacy and security itself.

Xen patches 7-year-old bug that shattered hypervisor security

For seven years, Xen virtualization software used by Amazon Web Services and other cloud computing providers has contained a vulnerability that allowed attackers to break out of their confined accounts and access extremely sensitive parts of the underlying operating system. The bug, which some researchers say is probably the worst ever to hit the open source project, was finally made public Thursday along with a patch.

As a result of the bug, “malicious PV guest administrators can escalate privilege so as to control the whole system,” Xen Project managers wrote in an advisory. The managers were referring to an approach known as paravirtualization, which allows multiple lower-privileged users to run highly isolated computing instances on the same piece of hardware. By allowing guests to break out of those confines, CVE-2015-7835, as the vulnerability is indexed, compromised a core tenet of virtualization. It comes five months after a similarly critical bug was disclosed in the Xen, KVM, and native QEMU virtual machine platforms.

“The above is a political way of stating the bug is a very critical one,” researchers with Qubes OS, a desktop operating system that uses Xen to secure sensitive resources, wrote in an analysis published Thursday. “Probably the worst we have seen affecting the Xen hypervisor, ever. Sadly.”

Thursday’s disclosure comes a few weeks after Xen Project managers privately warned a select group of predisclosure members of the vulnerability. That means Amazon and many other cloud services have already patched the vulnerability. It would also explain why some services have recently required customers to restart their guest operating systems. Linode customers, for instance, received e-mails two weeks ago notifying them of Xen security advisories that would require a reboot no later than October 29, when the updates would go live. An Amazon advisory, meanwhile, said the update required no reboot.

“Really shocking”

The Qubes OS analysis criticized the development process that allowed a bug of such high severity to persist for such a long time. It also questioned whether it was time for Xen developers to redesign the hypervisor to do away with paravirtualized virtual machines. Qubes researchers wrote:

Admittedly this is subtle bug, because there is no buggy code that could be spotted immediately. The bug emerges only if one looks at a bigger picture of logic flows (compare also QSB #09 for a somehow similar situation).

On the other hand, it is really shocking that such a bug has been lurking in the core of the hypervisor for so many years. In our opinion the Xen project should rethink their coding guidelines and try to come up with practices and perhaps additional mechanisms that would not let similar flaws to plague the hypervisor ever again (assert-like mechanisms perhaps?). Otherwise the whole project makes no sense, at least to those who would like to use Xen for security-sensitive work.

The vulnerability affects Xen version 3.4 and later, but only on x86 systems. ARM systems are not susceptible. Only paravirtualization guests can exploit the bug, and it doesn’t matter if the guests are running 32-bit or 64-bit instances. Now that the vulnerability has gone public, it’s a fair bet that unpatched systems will be exploited. Anyone relying on Xen who has not yet updated should install the patch as soon as possible.
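The affected-configuration matrix in that paragraph—Xen 3.4 and later, x86 hosts only, paravirtualized guests only, either bitness—reduces to a simple check. The helper below is a hypothetical illustration of that matrix, not anything shipped by the Xen Project:

```python
# Hypothetical helper encoding the advisory's affected-version matrix for
# CVE-2015-7835: Xen 3.4+, x86 only, PV guests only (32- or 64-bit alike).
def cve_2015_7835_exposed(xen_version: str, arch: str, guest_mode: str) -> bool:
    """Return True if this host/guest combination falls in the affected range."""
    if arch != "x86":          # ARM systems are not susceptible
        return False
    if guest_mode != "pv":     # only paravirtualized guests can exploit the bug
        return False
    major, minor = (int(p) for p in xen_version.split(".")[:2])
    return (major, minor) >= (3, 4)
```

Note that guest bitness never appears in the check: per the advisory, 32-bit and 64-bit PV instances are equally able to exploit the flaw.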

Ars UNITE: Join us to talk about the future of the Internet of Things

Have you ever checked from the grocery store how many eggs you have in the fridge? You will someday… maybe. But IoT will find its way into your life in more subtle ways.

It’s the fourth day of Ars Unite, our week-long virtual conference on the future of technology. Today we’re talking about the “Internet of Things” (IoT)—the fusion of networking, embedded computing technology, cloud computing, sensors, and electronic controls that is gradually working its way into nearly every aspect of daily life. A new wave of technologies is making the physical and digital worlds more closely connected, for better or worse. We’ll be talking about the future of IoT technology, the challenges that need to be overcome to make sure it doesn’t kill us all, and more on our YouTube channel at 1pm Eastern Time.

You may already have IoT technology in your house and not know it. If you have a “smart” electric meter, cable television, a broadband router, or a “connected car,” congratulations! You’re already touched by the Internet of Things, technically speaking. But as we discussed in our feature this morning, there are three major areas of concern that need to be addressed as IoT scales toward its expected size of over 50 billion devices in the next five years: security, privacy, and reliability. In many ways, these three concerns are all closely connected.

One of the biggest concerns about IoT is that connected devices don’t get the same sort of security patches that the types of things we usually think about being connected to the Internet—PCs, smartphones, and servers—get regularly. The software dependencies of embedded devices are just as complex as they are for larger systems in many ways, and patching bugs without breaking things requires time and money. But it also requires that systems be built to be patchable in the first place.
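That “built to be patchable” requirement usually means an update path that refuses anything the vendor didn’t sign. The sketch below is a minimal illustration of that idea only; the names are hypothetical, and HMAC with a shared key stands in for the asymmetric firmware signing a real device would use:

```python
# Illustrative only: a device applies an update solely when its contents
# verify against a vendor signature. HMAC-SHA256 stands in for real
# public-key firmware signing; all names here are hypothetical.
import hashlib
import hmac

VENDOR_KEY = b"example-shared-secret"  # a real device would hold a public key

def sign_update(blob: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Vendor side: produce a signature over the update payload."""
    return hmac.new(key, blob, hashlib.sha256).digest()

def apply_update(blob: bytes, signature: bytes, key: bytes = VENDOR_KEY) -> bool:
    """Device side: verify before flashing; return whether the update applied."""
    if not hmac.compare_digest(sign_update(blob, key), signature):
        return False  # reject tampered or unsigned payloads
    # ...write the verified firmware image here...
    return True
```

The point of the sketch is the refusal path: a device that cannot distinguish a vendor update from an attacker’s payload is worse off for being updatable at all.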

For cheap consumer IoT devices, it’s often not economically practical for the companies that sell the devices to make them updatable or continue to provide software fixes once they are past their implied warranty period. For larger industrial systems, patching runs against decades of engineering culture and requires a great deal of care to execute without leaving systems that are critical to the economy vulnerable to someone abusing the interface used to patch them. Unfortunately, in many domains, even basic security measures haven’t been taken to protect devices—they’re just exposed naked to the Internet.

I’ll be joined today by Ars’ Senior Business Editor Cyrus Farivar, who has covered some of the policy implications of IoT (and drones in particular) to discuss both the potential plusses and minuses of IoT in the near future. We’ll talk about everything from connected toys to multi-billion dollar aerial sensors gone wild. Be sure to join us, and we’ll take on your questions and comments.

The future is the Internet of Things—deal with it

Welcome to Ars UNITE, our week-long virtual conference on the ways that innovation brings unusual pairings together. Today, we examine the inevitable, growing Internet of Things and the security concerns we’ll all need to consider. Join us this afternoon at 1pm Eastern (10am Pacific) for a live discussion on the topic with article author Sean Gallagher and his expert guest; your comments and questions are welcome.

Even before there was a World Wide Web, there was an Internet of Things.

In 1991, a couple of researchers at the University of Cambridge Computer Lab set out to solve the problem of making fruitless quests through the building to a shared coffee pot in the Lab’s Trojan Room. Using a video camera, a frame grabbing card, and a Motorola 68000 series-based computer running VME, they created a networked sensor that could show the current state of the pot. First configured as an X Window System application, the Trojan Coffee Pot server was converted to HTTP in 1993, becoming one of the early stars of the Internet. It was soon joined by other networked sensors, including a number of hot tubs.
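A modern analogue of that first networked sensor takes only a few lines. This is an illustrative sketch, not the Cambridge code; `read_pot_level()` is a stub standing in for the camera and frame grabber:

```python
# A tiny HTTP "coffee pot" sensor in the spirit of the Trojan Room setup.
# Everything here is illustrative; read_pot_level() is a hardware stub.
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_pot_level() -> float:
    """Stub sensor reading: fraction of the pot remaining."""
    return 0.4

class PotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"coffee pot level: {read_pot_level():.0%}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def run(port: int = 8080) -> None:
    HTTPServer(("", port), PotHandler).serve_forever()

# run()  # uncomment to serve the pot's state at http://localhost:8080/
```

The 1991 version needed a frame grabber and a departmental network; the same idea now fits in a stock-library script, which is much of why "things" are joining the Internet so quickly.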

Today, millions of devices expose what they see, hear, and otherwise sense to the Internet. And thanks to cheap embedded systems, they don’t need an old VME or Windows box to do it. Billions of other devices that defy the usual definition of “computer” are communicating over networks, almost entirely with other machines. These “Internet of Things” (IoT) devices send telemetry to and receive instructions from software both nearby and on far-flung servers. Software and sensors are controlling more of what once was done by humans, often more efficiently, conveniently, and cheaply.

This practice is changing how we interact with the physical world. We talk to our televisions and they listen, thanks to embedded sensors and voice processing chips that can tap into the cloud for corrections. We drive down the road and sensors gather data from our cell phones to measure the flow of traffic. Our cars have mobile apps to unlock them. Health devices send data back to doctors, and wristwatches let us send our pulse to someone else. The digital has become physical.

It has been only eight years since the smartphone emerged, introducing the new age of always-on mobile connectivity, and networked devices now already outnumber the people on the planet. By some estimates, within the next five years, the number of devices connected to the Internet will outnumber the people on the planet by over seven to one—50 billion machines, ranging from networked sensors to industrial robots.
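The cited figures are easy to sanity-check: 50 billion devices against a world population of roughly 7 billion (an approximation of the mid-2010s figure) does come out to just over seven to one:

```python
# Quick arithmetic check of the cited ratio. The population figure is an
# assumption (~7 billion people, circa the mid-2010s).
devices = 50e9
people = 7e9
ratio = devices / people  # roughly 7.1, i.e. "over seven to one"
```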

Inexpensive computing power, cheap or free connectivity, and the relative ease with which new software and chips can get devices connected will make it possible for governments, companies, and even individuals to collect detailed data from IoT devices and automate them in some way. It will be the things’ Internet; we’ll just be living in it.

But given the state of IoT today, that might be a bumpy tenancy if certain issues aren’t ironed out now. Security, privacy, and reliability concerns are the main barriers to a sudden arrival of some singularity where we all live as happy cogs in an IoT machine world. So how will the human social order take to a world of persistent networked everything?

Plugging into the spew

  1. An airplane being assembled at an Airbus facility. The company is developing “smart tools” that use local and network intelligence as part of its “factory of the future” initiative.

  2. An Airbus worker alongside a two-armed robot. IoT-enabled tooling is being developed to help humans collaborate with robots without having to think about it.

  3. Pattern recognition and tracking demonstrations for Airbus’ smart tool development.

  4. This data overlay in a lab at GE Software is based on sensor data from Hydro Quebec, showing potential sites for outages based on weather data.

  5. A redacted list of some IoT devices (in this case, Schneider Electric PLC industrial controls connected to Ethernet) visible to the naked Internet and catalogued by the Shodan search engine.

  6. A prototype of GrowBox, an IoT hydroponic system that uses sensors to optimize growth of… tomatoes.

  7. The US Army has developed networked sensors in helmets to measure concussive forces soldiers are exposed to in an effort to help protect them from brain injuries.

The promise of IoT is “smart” everything. Nest’s Internet-connected Learning thermostat, Nest Cam surveillance camera, and Protect networked smoke alarm promise a more energy-efficient, safer home. IoT technology is a key part of the pitch for “smart cities,” “smart buildings,” “smart factories,” and just about every other “smart” proposal from sensor manufacturers, networking companies, and big technology consultancies. Seemingly everyone is looking for a piece of the biggest potential collection of integration projects ever. Sometimes the “smart” is relatively close to the sensor itself, but it often relies on a remote cloud service or data center to process the information and control actions.

On the consumer side, while devices like Nest’s get much of the attention, wearable IoT devices are just starting to take off—despite the relatively low impact so far of high-profile efforts like the Apple Watch. “The Apple Watch may be on a slower liftoff cycle than other recent Apple hardware launches, but it has a complex number of use cases which are finding their home, purpose, and meaning,” said Mark Curtis, the chief client officer at Fjord, Accenture’s design consultancy. Within the next two to three years, he predicted, wrist-based devices will lose the need to be tethered to a smartphone. “At the same time, interactions between wearables and nearables (e.g., beacons, Amazon Echo, connected cars) will grow.”

The health field is the most immediate fit for wearables, because they can gather data that has a benefit without conscious human action. “A good example is our Fjord Fido diabetes platform,” Curtis said. “It requires complex linking between devices and data but would not have been possible without a smartwatch.”

Governments are especially interested in the analytical powers of IoT-collected data for all sorts of reasons, from tuning services at the most basic levels to understanding how to respond in an emergency—as well as collecting revenue. Traffic lights and even pedestrian crossing buttons could be used as networked sensors, said Michael Daly, chief technology officer for Raytheon Cybersecurity and Special Missions. “You could see how many times is this being used and how long people are waiting to cross, then adjust traffic flow accordingly,” he said.

Industry is equally interested in the data that can be tapped into by IoT, and more companies are examining the benefits of using the embedded intelligence and network connectivity of IoT devices to improve their own systems and products. In most of these applications, National Instruments Executive Vice President Eric Starkloff told Ars, companies are most interested in instrumenting their operations, “looking for events that are a warning of impending failure” in systems or squeezing additional efficiency out of their operations. So far, only a small fraction of industrial systems have network-based telemetry gathering, and Starkloff said that the greatest opportunities for growth over the next five years are in “brown field” applications. These are instances of simply upgrading or enhancing existing hardware in factories, refineries, office buildings and other physical plants with IoT goodness.

Manufacturing companies have been among the earliest adopters of IoT. General Electric has pushed forward its own massive internal investment in IoT technology to collect analytic data from everything from gas turbine engines to locomotives. IoT is also part of the “factory of the future” concept embraced by aircraft manufacturer Airbus, where National Instruments is helping the company put “smart IoT technologies into their smart tooling and robotics systems that work alongside human operators,” according to Starkloff.

Airbus’ IoT interest is as much about ensuring the precision of the company’s manufacturing as it is about sensing potential problems. “Today they put planes together mostly manually,” Starkloff said. “They want to move to the point where tools are intelligent—where a tool knows whether a rivet was put in correctly.” To do that, the analytics tracking system performance “has to be close, not up in cloud,” he explained. “They need devices communicating locally—smart tooling connected to smart wearables, such as glasses with a heads-up display.”

In a way, Airbus’ vision mirrors one that Boeing attempted in the 1990s with augmented reality (one the company has continued to invest in ever since). It’s also similar to some of the methods of tying IoT technology to augmented reality visualization we saw at GE Software earlier this year, where technicians could be directed to equipment needing service in a manufacturing environment and stepped through the process with visual cues. But Airbus’ setup also includes using IoT technology to communicate between human-operated tools and robotic systems, passing data over a local network to allow machines and humans to work collaboratively.

The Department of Defense has similar designs on IoT, though the systems that the DOD wants to enhance are often soldiers themselves. Embedded and wearable systems are turning soldiers into nodes on the DOD network, both to enhance their battlefield performance and to track their well-being. Aside from the work on autonomous drones and other sensors, the Army has developed networked helmet sensors that can help detect the severity of concussive blows (a bit of tech that the NFL has moved to adopt as well). The military, through a number of DARPA projects and other labs, continues to develop wearable technologies that will allow soldiers to interact with other systems.

At a recent conference sponsored by the Army’s Training and Doctrine Command (TRADOC), scientists discussed the possibility of “implanted” sensors that could communicate what a soldier was doing without the soldier having to consciously communicate it. Thomas F. Greco, director of intelligence at TRADOC, said that IoT technology coupled with wearable sensors could result in a “precision of knowing,” reducing ambiguity on the battlefield and allowing commanders to have absolute knowledge of what troops were doing. But he also said that having that kind of data could affect the order and discipline of soldiers. “Ambiguity is a kind of lubricant in personal relationships,” he said, wondering how that would change “when you have total knowledge and accountability.”

Still fuming over HTTPS mishap, Google makes Symantec an offer it can’t refuse

Google has given Symantec an offer it can’t refuse: give a thorough accounting of its ailing certificate authority process or risk having the world’s most popular browser—Chrome—issue scary warnings when end users visit HTTPS-protected websites that use Symantec credentials.

The ultimatum, made in a blog post published Wednesday afternoon, came five weeks after Symantec fired an undisclosed number of employees caught issuing unauthorized Transport Layer Security (TLS) certificates. The mis-issued certificates made it possible for the holders to impersonate HTTPS-protected Google webpages.

Symantec first said it improperly issued 23 test certificates for domains owned by Google, browser maker Opera, and three other unidentified organizations without the domain owners’ knowledge. A few weeks later, after Google disputed the low number, Symantec revised that figure upward, saying it found an additional 164 certificates for 76 domains and 2,458 certificates for domains that had never been registered. The mis-issued certificates represented a potentially critical threat to virtually the entire Internet population because they made it possible for the holders to cryptographically impersonate the affected sites and monitor communications sent to and from the legitimate servers.

“It’s obviously concerning that a CA would have such a long-running issue and that they would be unable to assess its scope after being alerted to it and conducting an audit,” Ryan Sleevi, a software engineer on the Google Chrome team, wrote in the blog post.

He went on to require that, beginning in June, Symantec publicly log all certificates it issues or risk having Chrome flag them as potentially unsafe. Currently, under the Chrome certificate transparency policy, Symantec and all other Chrome-trusted CAs must log all extended validation certificates—that is, TLS credentials that certify a site is owned by a specific organization, such as PayPal, Microsoft, or Bank of America. Beginning June 1, Symantec will be required to log all certificates, not just those with the extended validation flag.
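The policy shift Sleevi described—EV-only logging for every trusted CA today, all certificates for Symantec after the deadline—reduces to a simple rule. The sketch below is a hypothetical encoding of that rule; the year on the June 1 date and the issuer-name check are assumptions for illustration, not Chromium code:

```python
# Hypothetical encoding of the Chrome CT requirement described above:
# EV certificates must always be logged; after the deadline (June 1,
# assumed 2016), all newly issued Symantec certificates must be too.
from datetime import date

POLICY_START = date(2016, 6, 1)  # year is an assumption from context

def must_be_ct_logged(issued: date, is_ev: bool, issuer: str) -> bool:
    """Return True if Chrome's policy requires this certificate to be logged."""
    if is_ev:
        return True  # extended validation certs already require CT logging
    return issuer == "Symantec" and issued >= POLICY_START
```

A certificate failing this rule wouldn’t be silently distrusted; per Sleevi, it “may result in interstitials or other problems” in Google products.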

In language that was uncharacteristically stern, Sleevi continued:

After this date, certificates newly issued by Symantec that do not conform to the Chromium Certificate Transparency policy may result in interstitials or other problems when used in Google products.

More immediately, we are requesting of Symantec that they further update their public incident report with:

  1. A post-mortem analysis that details why they did not detect the additional certificates that we found.
  2. Details of each of the failures to uphold the relevant Baseline Requirements and EV Guidelines and what they believe the individual root cause was for each failure.

We are also requesting that Symantec provide us with a detailed set of steps they will take to correct and prevent each of the identified failures, as well as a timeline for when they expect to complete such work. Symantec may consider this latter information to be confidential and so we are not requesting that this be made public.

Following the implementation of these corrective steps, we expect Symantec to undergo a Point-in-time Readiness Assessment and a third-party security audit. The point-in-time assessment will establish Symantec’s conformance to each of these standards:

  • WebTrust Principles and Criteria for Certification Authorities
  • WebTrust Principles and Criteria for Certification Authorities – SSL Baseline with Network Security
  • WebTrust Principles and Criteria for Certification Authorities – Extended Validation SSL

The third-party security audit must assess:

  • The veracity of Symantec’s claims that at no time private keys were exposed to Symantec employees by the tool.
  • That Symantec employees could not use the tool in question to obtain certificates for which the employee controlled the private key.
  • That Symantec’s audit logging mechanism is reasonably protected from modification, deletion, or tampering, as described in Section 5.4.4 of their CPS.

We may take further action as additional information becomes available to us.

Symantec has issued a statement in response. It reads:

In September, we were alerted that a small number of test certificates for Symantec’s internal use had been mis-issued. We immediately began publicly investigating our full test certificate history and found others, most of which were for non-existent and unregistered domains. While there is no evidence that any harm was caused to any user or organization, this type of product testing was not consistent with the policies and standards we are committed to uphold. We confirmed that these test certificates have all been revoked or have expired, and worked directly with the browser community to have them blacklisted. To prevent this type of testing from occurring in the future, we have already put additional tool, policy and process safeguards in place, and announced plans to begin Certificate Transparency logging of all certificates. We have also engaged an independent third-party to evaluate our approach, in addition to expanding the scope of our annual audit.

The prospect of Chrome flagging every newly issued TLS certificate is sure to strike fear in the hearts of Symantec executives, since potential customers would almost surely choose a competing CA whose credentials don’t get this treatment. The demand for a “point-in-time readiness assessment,” meanwhile, can be seen as the certificate-authority equivalent of a misbehaving student being sent to the principal’s office. Generally, such assessments are required for CAs to become accredited in the first place. And while CAs are required to undergo a security audit every year or so, the added requirements spelled out by Sleevi are likely to make the next audit cost additional money and effort.

The message is clear. Too many certificate authorities—whether they’re the China Network Information Center, the French cyberdefense agency known as ANSSI, India’s National Informatics Centre, or the now defunct Dutch CA DigiNotar—have been allowed to get away with too much for too long. Google is using its considerable influence as the maker of the world’s most popular browser to warn them that there will be some extremely unpleasant consequences for future violations (though in fairness, some argue that Google would have taken this approach even if Chrome had a smaller market share).

Post updated to add comment from Symantec.

DOD radar blimp breaks loose, takes out power lines in 160-mile flight [Updated]

  1. One of the two JLENS aerostats on the ground at Aberdeen Proving Ground, Maryland. Two aerostats make up a JLENS “orbit”.

  2. The large radome on the bottom of the JLENS aerostat on the loose carries a large phased-array search radar. The other aerostat carries a fire control radar that can provide targeting information for air defense missiles and aircraft.

  3. The tether for the aerostat, which is supposed to both keep the balloon from wandering off and carry power and networking connections, is a 1 1/8-inch-thick, Vectran-wrapped set of cables.

  4. Raytheon

    The location of the pair of JLENS aerostats is supposed to provide early air defense warning against low-flying threats to the National Capital Region, as well as much of the East Coast.

  5. Raytheon

    According to Raytheon, the search and targeting radar of JLENS could cover an area the size of Texas, from North Carolina to Massachusetts.

  6. But the JLENS wasn’t operational the day an actual low-flying threat—a postman-flown autogyro—landed on the Capitol lawn.

One of the two tethered aerostats that make up the Joint Land Attack Cruise Missile Defense Elevated Netted Sensor System (JLENS) broke loose from its moorings today and drifted across the skies of Maryland and Pennsylvania before coming down to earth 160 miles away. Two Air National Guard F-16 fighters were scrambled to monitor its movements, while its trailing tether took out power lines in Pennsylvania, causing blackouts across the state.

JLENS’ twin aerostats are (or were) supposed to provide airborne early warning and targeting of low-flying airborne threats coming in from the Atlantic, covering a radius of 300 miles with their look-down search and targeting radar. They have been the subject of much controversy because of the cost of the program; a recent Los Angeles Times report called the $2.7 billion project delivered by Raytheon a “zombie” program: “costly, ineffectual and seemingly impossible to kill.”

The twin white balloons with their radomes are usually visible from Baltimore and much of surrounding Maryland, flying on tethers at sites in Baltimore County and Harford County near the Army’s Aberdeen Proving Ground. The 242-foot-long JLENS aerostats are designed to operate at altitudes of up to 10,000 feet and can stay aloft for up to 30 days at a time before being retrieved for maintenance. The tethers, made of Vectran (a material similar to Kevlar), are 1 1/8 inches thick and are designed to withstand 100 mile-per-hour winds. However, the Harford County tether, near Aberdeen Proving Ground’s Edgewood Arsenal facility, broke today about halfway up to the JLENS aerostat, allowing the unmanned, unpowered blimp to be carried off while trailing 6,700 feet of cable. High winds during a storm that passed through the Baltimore region, or perhaps wind shear associated with the storms, snapped the tether just after noon local time, setting the aerostat adrift.

The tether had previously withstood a 106 mile-per-hour wind, according to Raytheon, when the system was accidentally exposed to a storm during testing. The Army was standing by to pull the JLENS balloons down to prevent damage last month when Hurricane Joaquin threatened the mid-Atlantic region. The North American Aerospace Defense Command (NORAD) and the FAA are coordinating tracking of the drifting aerostat and routing air traffic around it, according to NORAD spokesman Michael Kucharek, who told the Baltimore Sun that NORAD was working with other agencies “to address the safe recovery of the aerostat.”

In the meantime, the aerostat has left a trail of destruction as its flight path approaches higher terrain. The cables dangling from the blimp took out power lines in Pennsylvania, resulting in massive power outages across the eastern part of the state, affecting Lancaster, Harrisburg, and much of the Poconos region. The wanderings of the JLENS have become the subject of multiple Twitter accounts, including @noradblimp and @bmoreblimp.

@ABC@GMA Landed in bloomsburg right by my school. Knocked out the power at CMVT.

— Fisher P Creasy (@FPCreasy) October 28, 2015

At about 4:30 Eastern time, the aerostat came to ground in Moreland Township, Pennsylvania, having apparently lost helium during its ordeal. A Pennsylvania State Police spokesperson said that the balloon had been “contained,” and the military was moving in to recover it.

13 million plaintext passwords belonging to webhost users leaked online

A security researcher has discovered a trove of more than 13 million plaintext passwords that appear to belong to users of 000Webhost, a service that says it provides reliable and high-speed webhosting for free.

The leaked data, which also includes users’ names and e-mail addresses, was obtained by Troy Hunt, an Australian researcher and the operator of Have I Been Pwned?, a service that helps people figure out if their personal data has been exposed in website breaches. Hunt received the data from someone who contacted him and said it was the result of a hack five months ago on 000Webhost.

Hunt has so far confirmed with five of the people included in the list that it contains the names, passwords, and IP addresses they used to access 000Webhost. “By now there’s no remaining doubt that the breach is legitimate and that impacted users will have to know,” he wrote in a blog post published Wednesday. He said that he worked hard to notify company officials and get them to publicly warn users that their passwords have been exposed. So far, all that’s happened, he said, is that the service has notified users who log in that their passwords have been reset “by 000Webhost system for security reasons.”

Update Oct 28, 2015 11:04am PDT: In a Facebook post published Wednesday morning, 000Webhost officials confirmed the breach and said it was the result of hackers who exploited an old version of the PHP programming language to gain access to 000Webhost systems. The advisory makes no reference to the plaintext passwords, although it does advise users to change their credentials. Hunt has also encountered evidence the breach may extend to other Web hosting providers, presumably because of partnerships they had with 000Webhost.

Hunt uncovered a variety of weaknesses, including the use of unencrypted HTTP communications on the login page and a code routine that placed a user’s plaintext password in the resulting URL. That means the unobfuscated passwords were likely written to all kinds of server logs. It’s also possible that the site didn’t follow standard industry practice and cryptographically hash the passwords when storing them. In any event, the data may have been accessed through a SQL injection exploit or another common website attack, or by an insider with privileged access to the 000Webhost system.
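The URL weakness is easy to illustrate. The sketch below (hypothetical endpoint and parameter names, Python standard library only) shows how a credential placed in a query string survives into anything that records the raw request, such as server access logs, proxies, or browser history:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_redirect(username: str, password: str) -> str:
    # Hypothetical endpoint name, for illustration only. A login flow
    # that puts credentials in the query string reproduces the flaw
    # described above.
    return "http://example.com/members.php?" + urlencode(
        {"user": username, "pass": password}
    )

url = build_redirect("alice", "hunter2")

# Anything that logs the raw request line now holds the plaintext password.
access_log_line = f"GET {url} HTTP/1.1"
print(access_log_line)

# The password is trivially recoverable from the logged URL.
query = parse_qs(urlparse(url).query)
assert query["pass"] == ["hunter2"]
```

Credentials belong in the body of a POST request over HTTPS, which keeps them out of the request line that servers and intermediaries routinely log.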

As password leaks go, 13 million is a large number, but it’s still dwarfed by some of the biggest breaches. The recent compromise of clandestine affairs website Ashley Madison, for instance, spilled 34 million passwords. Then again, Ashley Madison administrators went to the trouble of hashing passwords using the deliberately slow bcrypt function (although a critical programming error ultimately made it possible to crack 11 million of them). Although not perfect, the measure gave Ashley Madison users time to change their passwords and forced crackers to expend considerable effort. By contrast, the plaintext passwords here are immediately usable, meaning even extremely strong passcodes offer no protection.
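The difference bcrypt-style hashing makes can be sketched briefly. bcrypt itself requires a third-party package, so this illustration uses the standard library’s scrypt, another deliberately expensive password hash; the cost parameters shown are illustrative choices, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A unique random salt per user is stored alongside the digest.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("password123", salt, digest)
```

Unlike a plaintext leak, a stolen salted digest forces an attacker to pay the full hashing cost for every guess against every user, which is what bought Ashley Madison’s users time.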

Anyone who has used 000Webhost should be on the alert for fraud. Users who used the same or a similar password on other websites should change it immediately. The fresh infusion of 13 million passwords into the already massive corpus of existing passwords should bring new urgency to the oft-repeated admonition to use a long, randomly generated password that’s unique to each site. Advice on how to do that is here.
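The “long, randomly generated” advice is straightforward to follow with any language’s cryptographic random source; a minimal Python sketch (the length and alphabet are illustrative choices, not a standard):

```python
import secrets
import string

# Draw characters from the OS's cryptographically secure random source.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
print(pw)
assert len(pw) == 24
```

In practice, a password manager does this generation and per-site storage automatically, which is what makes unique passwords sustainable across dozens of sites.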

MIT uses wireless signals to identify people through walls

RF-Capture uses wireless signals to take snapshots of human bodies, even if they’re in a different room.

MIT’s Computer Science and Artificial Intelligence Lab is developing a device that uses wireless signals to identify human figures through walls. Called RF-Capture, the technology “can trace a person’s hand as he writes in the air and even distinguish between 15 different people through a wall with nearly 90 percent accuracy,” MIT said in an announcement today.

MIT said the technology could have at least a few real-world applications. It could work in virtual reality video games, “allowing you to interact with a game from different rooms or even trigger distinct actions based on which hand you move.” RF-Capture could also assist in motion capture for movie production without requiring actors to wear body sensors.

MIT is “working to turn this technology into an in-home device that can call 911 if it detects that a family member has fallen unconscious,” said Dina Katabi, director of the Wireless@MIT center. “You could also imagine it being used to operate your lights and TVs, or to adjust your heating by monitoring where you are in the house.”

How it works


RF-Capture is “the first system that can capture the human figure when the person is fully occluded (i.e., in the absence of any path for visible light),” MIT researchers said in a paper that was accepted for the SIGGRAPH Asia conference next month. RF-Capture uses a compact array of 20 antennas, transmitting wireless signals while “reconstruct[ing] a human figure by analyzing the signals’ reflections,” MIT said. Its transmit power is just 1/1,000 of that needed by Wi-Fi signals, while operating at frequencies between 5.46GHz and 7.24GHz. These frequencies are lower than those used in X-ray, terahertz, and millimeter-wave systems, allowing the signals to penetrate walls.

Using these frequencies, which have some overlap with Wi-Fi, allows the system to rely on “low-cost massively-produced RF components,” the paper said.


RF-Capture uses a “coarse-to-fine algorithm” that scans 3D space to find RF reflections of human limbs, generating 3D snapshots of the reflections. Multiple snapshots are stitched together to recreate a human figure.

The antenna array only captures a subset of the RF reflections off the human body. But as a person walks, it can analyze different points on the body and trace a full figure.

RF-Capture is able to distinguish five human figures with 95.7 percent accuracy and 15 people with 88.2 percent accuracy, according to MIT. It can identify which body part a person is moving with 99.13 percent accuracy when the person behind the wall is three meters away from the system, and 76.4 percent accuracy when the person is eight meters away.

“Finally, we show that RF-Capture can track the palm of a user to within a couple of centimeters, tracing letters that the user writes in the air from behind a wall,” researchers wrote.

Researchers compared RF-Capture’s results with the output of a Microsoft Kinect skeletal tracking system. While the Kinect performs better—with the benefit of being in the same room as the human subject—RF-Capture located body parts with a median error of 2.19 centimeters. In 90 out of 100 experiments, RF-Capture tracked the body part to within 4.84 centimeters.

There are clear limits in the current version, though.

“First, our current model assumes that the subject of interest starts by walking towards the device, hence allowing RF-Capture to capture consecutive RF snapshots that expose various body parts,” the researchers’ paper said. “Second, while the system can track individual body parts facing the device, such as a palm writing in the air, it cannot perform full skeletal tracking. This is because not all body parts appear in all RF snapshots. We believe these limitations can be addressed as our understanding of wireless reflections in the context of computer graphics and vision evolves.”

Sit? Stand? Nifty new workstation lets you lie down on the job

Video produced by Jennifer Hahn.

The debate over the health impact of working at a computer continues to rage. Standing desk fans insist that being on their feet is the way to go for health and productivity, but for many of the rest of us, standing up for hours on end looks like an awful lot of hard work. The science isn’t exactly clear-cut, either.

California startup Altwork has what may be the solution with its first product: the Altwork Station. While adjustable sit/stand desks have been done before, the Altwork Station takes things to the next level: it’s an integrated workstation combining seat, desk, and monitor stand, and it’s all electrically controlled to support not just sitting and standing but also a supine position: you lie back with your monitor or monitors above you. The keyboard and mouse stay affixed to your desk through the magical power of magnets.

I recently gave a prototype version of the Station a quick spin and came away intrigued and quite impressed. The flexibility is very compelling. I’m one of those annoying people who likes to pace incessantly while on the phone, so the ability to put the workstation into standing mode at the touch of a button is useful, and the laid-back posture is extremely comfortable. It will probably take a little time to get used to—not least because I found myself expecting the keyboard and mouse to fall off the desk, even though they didn’t. I could certainly envisage getting lots of work done like that—and a lot of gaming, too. In standing mode, it’s easy to swing the screen around to show it to other people, making ad hoc deskside presentations and collaboration easy and accessible.

And let’s be honest: lying back with an array of monitors around you (the monitor arm supports up to 35 lbs and standard VESA mounts, so a triple-head setup is no problem) in a chair that is purpose-built for hardcore computing feels a bit sci-fi. It’s the kind of thing that you’d expect to see in a movie, the sort of awesome setup that the bad guy hacker is using—probably with a keyboard for each hand—to reprogram the laser satellite while simultaneously making a nuclear power station melt down.

The Station has a programmable memory to allow you to define the exact positions that you prefer; if you like to sit bolt upright, you can. Prefer a more semi-recumbent posture, no matter how indecorous? Not a problem. The motors do the work for you, and the position of the chair can be changed and adjusted at the touch of a button.


  1. Altwork

    You can go from standing…

  2. Altwork

    … to sitting…

  3. Altwork

    … to laid right back, all at the touch of a button.

Altwork is aiming the Station at computer-using professionals—software developers, finance professionals, CAD users, and so on—who have to use computers for extended periods. While the Station is a piece of furniture, CEO Che Voight told us that the company sees it as more of a tool: a productive, functional object that’s purpose-built to help these people do their jobs. We have seen other workstations with a similar kind of concept—the MWE Lab Emperor range is perhaps the best known—but the Station seems to do a better job both in terms of the range of its positioning options and the convenience of getting in and out.

That professional aim carries with it a professional price. The Station is available for early-adopter preorder for $3,900, and it is due to ship in mid-2016. Regular pricing will be $5,900. That’s a lot more than the Ikea Markus I currently sit on, but then again, the Altwork Station does a lot more than the Ikea Markus I currently sit on. It would be a worthy upgrade.

Listing image by Altwork