Friday, July 29, 2016

How to avoid ransomware attacks: 10 tips

As ransomware increasingly targets healthcare organizations, schools and government agencies, security experts offer advice to help IT leaders prepare and protect.

Nigerian princes are no longer the only menaces lurking in an employee's inbox. For healthcare organizations, schools, government agencies and many businesses, ransomware attacks—an especially sinister type of malware delivered through spear phishing emails that locks up valuable data assets and demands a ransom to release them—are a rapidly growing security threat.

"We're currently seeing a massive explosion in innovation in the types of ransomware and the ways it's getting into organizations," says Rick McElroy, security strategist for cyber security company Carbon Black Enterprise Response. "It's a big business, and the return on investment to attackers is there—it's going to get worse."

While ransomware has existed for years, 2015 saw a spike in activity. The FBI received 2,453 complaints, with losses of over $1.6 million, up from 1,402 complaints the year before, according to annual reports from the bureau's Internet Crime Complaint Center. And the numbers are only growing in 2016, the FBI reports.

"The Dark Web and Bitcoin allow almost anyone to sell stolen data without identification—cyber criminals understand they can make easy cash without the risk of being jailed," says Ilia Kolochenko, CEO of web security company High-Tech Bridge. And hackers—most of whom are located in developing countries—are growing more sophisticated, even developing downloadable ransomware toolkits for less-experienced hackers to deploy, according to the 2016 Institute for Critical Infrastructure Technology Ransomware Report.

"The days of grammatically incorrect, mass spam phishing attacks are pretty much over," says James Scott, senior fellow and co-founder of the Institute for Critical Infrastructure Technology, and co-author of the report. Hackers can now check a victim's social media accounts, and create a fake email address pretending to be a friend or contact in order to get them to click on an infected link or attachment. "It's much more targeted, and will exploit a particular vulnerability in a device, application, server or software," Scott adds.

A typical ransom demand is $300, according to a report from security firm Symantec.

Health threats

The healthcare sector is highly targeted by hackers because of its antiquated or misconfigured computer security systems and the volume of sensitive data its organizations hold, says David DeSanto, director of projects and threat researcher at Spirent Communications.

The large number of employees at most hospitals also makes cyber security training difficult, DeSanto says. Experts commonly see attacks arrive through spear phishing—targeted emails carrying attachments with names such as "updated patient list" or "billing codes" that mimic typical hospital communications and that employees may open if not warned.

In 2015, over 230 healthcare breaches, each affecting the records of 500 or more individuals, were reported, according to data from the U.S. Department of Health and Human Services Office for Civil Rights.

A February ransomware attack launched against Hollywood Presbyterian Medical Center in southern California locked access to certain computer systems and left staff unable to communicate electronically for 10 days. The hospital paid a $17,000 ransom in bitcoin to the cybercriminals, says CEO Alan Stefanek.

Following security best practices can help healthcare organizations protect themselves. "The best way is to make regular backups of all systems and critical data so that you can restore back to a known good state prior to the ransomware being on the system," DeSanto says.

Without security best practices, healthcare organizations may be left with few options to retrieve information. In these cases, healthcare organizations may choose to pay the ransom. Some even find that paying the ransom for a few infected computers is cheap compared to the cost of maintaining the infrastructure needed to fend off these attacks, DeSanto adds.

Schools and businesses

Hackers are gaining traction and using new methods across other industry verticals as well. In 2014, a large European financial services company (whose name was not disclosed) discovered, with the help of High-Tech Bridge, that a hacker had planted a back door between a web application and a data set.

For six months, the hacker encrypted all information before it was stored in a database, undetected by company staffers. Then, they removed the encryption key, crashing the application, and demanded $50,000 to restore access to the database.

However, the company did not end up paying, thanks to mistakes made by the hackers, Kolochenko says.

Other victims are not as lucky, says Engin Kirda, professor of computer science at Northeastern University. "If the ransomware hacker does the encryption well, once the data is encrypted it's nearly impossible to decrypt," he adds.

Such was the case for South Carolina's Horry County School District this February, when hackers froze networks for 42,000 students and thousands of staff. District technology director Charles Hucks tried to shut down the system, but within minutes, the attackers immobilized 60 percent of Horry County's computers. The district paid $8,500 in Bitcoin to unlock their systems.

Tips for IT leaders

To prevent a ransomware attack, experts say IT and information security leaders should do the following:

  1. Keep clear inventories of all of your digital assets and their locations, so cyber criminals do not attack a system you are unaware of.
  2. Keep all software up to date, including operating systems and applications.
  3. Back up all information every day, including information on employee devices, so you can restore encrypted data if attacked.
  4. Back up all information to a secure, offsite location.
  5. Segment your network: Don't place all data on one file share accessed by everyone in the company.
  6. Train staff on cyber security practices, emphasizing not opening attachments or links from unknown sources.
  7. Develop a communication strategy to inform employees if a virus reaches the company network.
  8. Before an attack happens, work with your board to decide whether your company would pay a ransom or launch an investigation.
  9. Perform a threat analysis with your vendors, reviewing security throughout the lifecycle of each device or application.
  10. Instruct information security teams to perform penetration testing to find any vulnerabilities.
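Tips 3 and 4 are the ones that directly defeat ransomware, and they are straightforward to automate. Below is a minimal sketch in Python: it copies files to a backup location, records a checksum for each, and can later verify whether the live files still match. Paths, scheduling, and offsite transport are left out and would be specific to your environment.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file so a restored copy can be verified against the original."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(source: Path, dest: Path) -> dict:
    """Copy every file under `source` into `dest` and record its checksum.

    A ransomware-resistant setup would write `dest` to offline or offsite
    storage (tip 4) rather than a file share the whole company can reach
    (tip 5).
    """
    manifest = {}
    for item in source.rglob("*"):
        if item.is_file():
            target = dest / item.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)
            manifest[str(item.relative_to(source))] = sha256(target)
    return manifest

def verify(source: Path, manifest: dict) -> bool:
    """Return True if the live files still match the backed-up checksums.

    A mismatch on files you haven't edited is one signal that something --
    possibly ransomware -- has rewritten them.
    """
    return all(
        sha256(source / name) == digest for name, digest in manifest.items()
    )
```

A mismatch detected by `verify` is also a quick way to scope which machines were hit before deciding whether to restore from the backup.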

Mitigating an attack

If your company is hacked with ransomware, you can explore the free ransomware response kit for a suite of tools that can help. Experts also recommend the following to moderate an attack:

Research whether similar malware has been investigated by other IT teams, and whether it is possible to decrypt it on your own. About 30 percent of encrypted data can be decrypted without paying a ransom, Kolochenko of High-Tech Bridge says.

Remove the infected machines from the network, so the ransomware does not use the machine to spread throughout your network.

Decide whether to launch an official investigation, or pay the ransom and treat it as a lesson learned.

"There is always going to be a new, more hyper-evolved variant of ransomware delivered along a new vector that exploits a newly-found vulnerability within a common-use application," Scott of ICIT says. "But there are so many technologies out there that offer security—you just have to use them."

Thursday, July 28, 2016

How to Recover Your Files From a BitLocker-Encrypted Drive

When you need to recover files from a BitLocker-encrypted drive, you need a key. Time to turn to Microsoft:

Microsoft’s BitLocker encryption always forces you to create a recovery key when you set it up. You may have printed that recovery key, written it down, saved it to a file, or stored it online with a Microsoft account. If your BitLocker drive isn’t unlocking normally, the recovery key is your only option.

There are many reasons you may get locked out of your hard drive–maybe your computer's TPM is no longer unlocking your drive automatically, or you've forgotten a password or PIN. The recovery key will also be necessary if you want to remove a BitLocker-encrypted drive from a computer and unlock it on another computer; without the first computer's TPM, the recovery key is the only way in.

First, Find Your Recovery Key

If you can’t find your recovery key, try to think back to when you set up BitLocker. You were asked to either write the key down, print it out to a piece of paper, or save it to a file on an external drive, such as a USB drive. You were also given the option to upload the BitLocker recovery key to your Microsoft account online.

That key should hopefully be stored somewhere safe if you printed it to a piece of paper or saved it to an external drive.

To retrieve a recovery key you uploaded to Microsoft’s servers, visit the OneDrive Recovery Key page and sign in with the same Microsoft account you uploaded the recovery key with. You’ll see the key here if you uploaded it. If you don’t see the key, try signing in with another Microsoft account you might have used.

If there are multiple accounts, you can use the “Key ID” displayed on the BitLocker screen on the computer and match it to the Key ID that appears on the web page. That will help you find the correct key.

If your computer is connected to a domain–often the case on computers owned by an organization and provided to employees or students–there’s a good chance the network administrator has the recovery key. Contact the domain administrator to get the recovery key.

If you don’t have your recovery key, you may be out of luck–hopefully you have a backup of all your data! And next time, be sure to write down that recovery key and keep it in a safe place (or save it with your Microsoft Account).

Situation One: If Your Computer Isn’t Unlocking the Drive at Boot

Drives encrypted with BitLocker are normally unlocked automatically with your computer's built-in TPM every time you boot it. If the TPM unlock method fails, you'll see a "BitLocker Recovery" error screen that asks you to "Enter the recovery key for this drive". (If you've set up your computer to require a password, PIN, USB drive, or smart card each time it boots, you'll see the same unlock screen you normally use before getting the BitLocker Recovery screen–if you don't know that password, press Esc to enter BitLocker Recovery.)

Type your recovery key to continue. This will unlock the drive and your computer will boot normally.

The ID displayed here will help you identify the correct recovery key if you have multiple recovery keys printed, saved, or uploaded online.

Situation Two: If You Need to Unlock the Drive From Within Windows

The above method will help you unlock your system drive and any other drives that are normally unlocked during the boot-up process.

However, you may need to unlock a BitLocker-encrypted drive from within Windows. Perhaps you have an external drive or USB stick with BitLocker encryption and it’s not unlocking normally, or perhaps you’ve taken a BitLocker-encrypted drive from another computer and connected it to your current computer.

To do this, first connect the drive to your computer. Open the Control Panel and head to System and Security > BitLocker Drive Encryption. You can only do this on Professional editions of Windows, as only they provide access to the BitLocker software.

Locate the drive in the BitLocker window and click the “Unlock Drive” option next to it.

You’ll be asked to enter the password, PIN, or whatever other details you need to provide to unlock the drive. If you don’t have the information, select More Options > Enter Recovery Key.

Enter the recovery key to unlock the drive. Once you enter the recovery key, the drive will unlock and you can access the files on it. The ID displayed here will help you find the correct recovery key if you have multiple saved keys to choose from.
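If you prefer the command line, the built-in manage-bde tool can perform the same unlock from an administrator Command Prompt. The drive letter below is an example, and the recovery password is a placeholder for your own 48-digit key:

```
manage-bde -unlock D: -RecoveryPassword <your 48-digit recovery key>
manage-bde -status D:
```

The second command reports the drive's lock status, so you can confirm the unlock succeeded.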

If your computer is displaying a BitLocker error screen each time it boots and you don’t have any way of getting the recovery key, you can always use the “reset this PC” troubleshooting option to fully wipe your computer. You’ll be able to use the computer again, but you’ll lose all the files stored on it.

If you have an external drive that’s encrypted with BitLocker and you don’t have the recovery key or any other way to unlock it, you may have to do the same thing. Format the drive and you’ll erase its contents, but at least you’ll be able to use the drive again.

Wednesday, July 27, 2016

Flaws in wireless keyboards let hackers snoop on everything you type

Many popular, low-cost wireless keyboards don't encrypt keystrokes.

This nondescript USB dongle can be used to spy on wireless keyboards from hundreds of feet away. 

Your wireless keyboard is giving up your secrets -- literally.

With an antenna and wireless dongle worth a few bucks, and a few lines of Python code, a hacker can passively and covertly record everything you type on your wireless keyboard from hundreds of feet away. Usernames, passwords, credit card data, your manuscript or company's balance sheet -- whatever you're working on at the time.

It's an attack that can't be easily prevented, and one that almost nobody thought of -- except the security researchers who found it.

Security firm Bastille calls it "KeySniffer," a set of vulnerabilities in common, low-cost wireless keyboards that can allow a hacker to eavesdrop from a distance.

Here's how it works: a number of wireless keyboards use proprietary, largely unsecured and untested radio protocols to connect to a computer -- unlike Bluetooth, a known wireless standard that's been tried and tested over the years. These keyboards are always transmitting, making it easy to find and listen in from afar with the right equipment. And because the keystrokes aren't encrypted, a hacker can read everything a victim types -- and even inject keystrokes directly onto the victim's computer.
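The difference encryption makes can be shown with a toy sketch. This is not KeySniffer's actual radio code, and the XOR keystream below is a crude stand-in for real encryption such as AES -- it only illustrates why cleartext frames are trivially readable while scrambled ones are not:

```python
def radio_frames(text):
    """Pretend each typed character is broadcast as one one-byte radio frame."""
    return [ord(c) for c in text]

typed = "hunter2"

# Vulnerable keyboard: the eavesdropper decodes captured frames directly.
sniffed = "".join(chr(b) for b in radio_frames(typed))

# Keyboard that scrambles frames with a secret shared with the dongle
# (demo key only -- real keyboards would use proper encryption): the same
# capture is gibberish to anyone without the key.
keystream = b"\x13\x37\x42\x99\x21\x55\x77"  # same length as the input
encrypted = [b ^ k for b, k in zip(radio_frames(typed), keystream)]
garbled = "".join(chr(b) for b in encrypted)
```

Only the legitimate dongle, which knows the keystream, can XOR the frames back into the original keystrokes.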

The attack is so easy to carry out that almost anyone can do it -- from petty thieves to state actors.

Marc Newlin, a researcher at the company who was credited with finding the flaw, said it was "pretty alarming" to discover.

"A hacker can 'sniff' all of the keystrokes, as well as inject their own keystrokes on the computer," he explained on the phone this week.

The researchers found that eight out of 12 keyboards from well-known vendors -- including HP, Kensington, and Toshiba -- are at risk of eavesdropping, but the list is far from exhaustive.

The scope of the problem is so large that the researchers fully expect that "millions" of devices are vulnerable to this new attack.

Worst of all? There's no fix.

"I think a lot of consumers reasonably expect that the wireless keyboard they're using won't put them at risk, but consumers might not have a high awareness of this risk," he said.

Ivan O'Sullivan, the company's chief research officer, admitted that the ease of this attack had him unsettled. "As a consumer, I expect that the keyboard that I buy won't transmit my keystrokes in plain-text."

"We were shocked. And consumers should be, too," he said.

This isn't the first time wireless devices have put their users at risk. Bastille was the company behind the now-infamous MouseJack flaw, which let hackers compromise a person's computer through their wireless mouse. Even as far back as 2010, it was known that some keyboards with weak encryption could be easily hacked.

Over half a decade later, Newlin said he was hopeful that his research will make more people aware, but he doesn't think this problem "will be resolved."

"Most of the vendors have not responded to our disclosure information," he said. "Many of the vendors haven't responded past an acknowledgement, or they haven't responded at all to our inquiries."

Though not all wireless keyboards are created equal and many are not vulnerable to the eavesdropping vulnerability, there is an easy fix to a simple problem.

"Get a wired keyboard," the researchers said.

Tuesday, July 26, 2016

Halliburton Report: Company Loses $148 Million From Operations With Venezuela

Oil services giant Halliburton reported losses totaling $148 million in connection with operations in Venezuela. The company agreed to take a promissory note in exchange for unpaid invoices tied to Venezuela.

PDVSA, Venezuela's state oil company, has accrued over $19 billion in debts to providers like Halliburton. The debt keeps growing as PDVSA grapples with low oil prices and an ailing socialist economy, which in turn have led some leading service companies to reduce operations.

Eulogio del Pino, president of PDVSA, said that his company had been in talks about financial agreements with Halliburton, Schlumberger NV, and Weatherford International PLC to securitize debts.

In a quarterly report, Halliburton claimed to have swapped $200 million in trade receivables for a promissory note with its “primary customer in Venezuela.”

Later, the company said it had “recorded the note at its fair market value at the date of exchange, resulting in a $148 million pre-tax loss.” The only firms legally permitted to work in Venezuelan oil fields are PDVSA and its affiliates.
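The arithmetic implied by the filing is simple enough to check. Recording the note at fair value means the pre-tax loss is the haircut taken on the receivables:

```python
# Figures from Halliburton's quarterly report, in millions of dollars.
face_value = 200    # trade receivables swapped for the promissory note
pretax_loss = 148   # loss booked when the note was recorded at fair value

# Fair value of the note = face value of the receivables minus the loss.
fair_value = face_value - pretax_loss      # 52
cents_on_dollar = fair_value / face_value  # 0.26
```

In other words, the note was valued at roughly 26 cents on the dollar of the receivables it replaced.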

In its 2015 earnings report, PDVSA said it had distributed $831 million in promissory notes to pay off debt to providers. The notes pay 6.5 percent interest and are set to mature in 2019.

Analysts say the payment problems are tied to reports of waning production at PDVSA. However, del Pino has rejected these reports and maintains that all debts are on their way to resolution.

Friday, July 22, 2016

Master Plan, Part Deux (Elon Musk)

The first master plan that I wrote 10 years ago is now in the final stages of completion. It wasn't all that complicated and basically consisted of:
  1. Create a low volume car, which would necessarily be expensive
  2. Use that money to develop a medium volume car at a lower price
  3. Use that money to create an affordable, high volume car
  4. Provide solar power. No kidding, this has literally been on our website for 10 years.
The reason we had to start off with step 1 was that it was all I could afford to do with what I made from PayPal. I thought our chances of success were so low that I didn't want to risk anyone's funds in the beginning but my own. The list of successful car company startups is short. As of 2016, the number of American car companies that haven't gone bankrupt is a grand total of two: Ford and Tesla. Starting a car company is idiotic and an electric car company is idiocy squared.
Also, a low volume car means a much smaller, simpler factory, albeit with most things done by hand. Without economies of scale, anything we built would be expensive, whether it was an economy sedan or a sports car. While at least some people would be prepared to pay a high price for a sports car, no one was going to pay $100k for an electric Honda Civic, no matter how cool it looked.
Part of the reason I wrote the first master plan was to defend against the inevitable attacks Tesla would face accusing us of just caring about making cars for rich people, implying that we felt there was a shortage of sports car companies or some other bizarre rationale. Unfortunately, the blog didn't stop countless attack articles on exactly these grounds, so it pretty much completely failed that objective.
However, the main reason was to explain how our actions fit into a larger picture, so that they would seem less random. The point of all this was, and remains, accelerating the advent of sustainable energy, so that we can imagine far into the future and life is still good. That's what "sustainable" means. It's not some silly, hippy thing -- it matters for everyone.
By definition, we must at some point achieve a sustainable energy economy or we will run out of fossil fuels to burn and civilization will collapse. Given that we must get off fossil fuels anyway and that virtually all scientists agree that dramatically increasing atmospheric and oceanic carbon levels is insane, the faster we achieve sustainability, the better.
Here is what we plan to do to make that day come sooner:
Integrate Energy Generation and Storage

Create a smoothly integrated and beautiful solar-roof-with-battery product that just works, empowering the individual as their own utility, and then scale that throughout the world. One ordering experience, one installation, one service contact, one phone app.

We can't do this well if Tesla and SolarCity are different companies, which is why we need to combine and break down the barriers inherent to being separate companies. That they are separate at all, despite similar origins and pursuit of the same overarching goal of sustainable energy, is largely an accident of history. Now that Tesla is ready to scale Powerwall and SolarCity is ready to provide highly differentiated solar, the time has come to bring them together.
Expand to Cover the Major Forms of Terrestrial Transport

Today, Tesla addresses two relatively small segments of premium sedans and SUVs. With the Model 3, a future compact SUV and a new kind of pickup truck, we plan to address most of the consumer market. A lower cost vehicle than the Model 3 is unlikely to be necessary, because of the third part of the plan described below.

What really matters to accelerate a sustainable future is being able to scale up production volume as quickly as possible. That is why Tesla engineering has transitioned to focus heavily on designing the machine that makes the machine -- turning the factory itself into a product. A first principles physics analysis of automotive production suggests that somewhere between a 5 to 10 fold improvement is achievable by version 3 on a roughly 2 year iteration cycle. The first Model 3 factory machine should be thought of as version 0.5, with version 1.0 probably in 2018.
In addition to consumer vehicles, there are two other types of electric vehicle needed: heavy-duty trucks and high passenger-density urban transport. Both are in the early stages of development at Tesla and should be ready for unveiling next year. We believe the Tesla Semi will deliver a substantial reduction in the cost of cargo transport, while increasing safety and making it really fun to operate.
With the advent of autonomy, it will probably make sense to shrink the size of buses and transition the role of bus driver to that of fleet manager. Traffic congestion would improve due to increased passenger areal density by eliminating the center aisle and putting seats where there are currently entryways, and matching acceleration and braking to other vehicles, thus avoiding the inertial impedance to smooth traffic flow of traditional heavy buses. It would also take people all the way to their destination. Fixed summon buttons at existing bus stops would serve those who don't have a phone. Design accommodates wheelchairs, strollers and bikes.

As the technology matures, all Tesla vehicles will have the hardware necessary to be fully self-driving with fail-operational capability, meaning that any given system in the car could break and your car will still drive itself safely. It is important to emphasize that refinement and validation of the software will take much longer than putting in place the cameras, radar, sonar and computing hardware.

Even once the software is highly refined and far better than the average human driver, there will still be a significant time gap, varying widely by jurisdiction, before true self-driving is approved by regulators. We expect that worldwide regulatory approval will require something on the order of 6 billion miles (10 billion km). Current fleet learning is happening at just over 3 million miles (5 million km) per day.
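Those two figures imply a rough timeline for accumulating the validation miles, assuming the learning rate stayed flat (in practice the fleet, and therefore the daily mileage, keeps growing):

```python
required_miles = 6_000_000_000  # Musk's ~6 billion mile regulatory estimate
miles_per_day = 3_000_000       # "just over 3 million miles" per day today

days = required_miles / miles_per_day
years = days / 365
print(int(days), round(years, 1))  # 2000 days, about 5.5 years at today's rate
```

That back-of-the-envelope gap is presumably part of why the fleet-learning rate itself matters so much to the plan.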
I should add a note here to explain why Tesla is deploying partial autonomy now, rather than waiting until some point in the future. The most important reason is that, when used correctly, it is already significantly safer than a person driving by themselves and it would therefore be morally reprehensible to delay release simply for fear of bad press or some mercantile calculation of legal liability.
According to the recently released 2015 NHTSA report, automotive fatalities increased by 8% to one death every 89 million miles. Autopilot miles will soon exceed twice that number and the system gets better every day. It would no more make sense to disable Tesla's Autopilot, as some have called for, than it would to disable autopilot in aircraft, after which our system is named.
It is also important to explain why we refer to Autopilot as "beta". This is not beta software in any normal sense of the word. Every release goes through extensive internal validation before it reaches any customers. It is called beta in order to decrease complacency and indicate that it will continue to improve (Autopilot is always off by default). Once we get to the point where Autopilot is approximately 10 times safer than the US vehicle average, the beta label will be removed.

When true self-driving is approved by regulators, it will mean that you will be able to summon your Tesla from pretty much anywhere. Once it picks you up, you will be able to sleep, read or do anything else enroute to your destination.

You will also be able to add your car to the Tesla shared fleet just by tapping a button on the Tesla phone app and have it generate income for you while you're at work or on vacation, significantly offsetting and at times potentially exceeding the monthly loan or lease cost. This dramatically lowers the true cost of ownership to the point where almost anyone could own a Tesla. Since most cars are only in use by their owner for 5% to 10% of the day, the fundamental economic utility of a true self-driving car is likely to be several times that of a car which is not.
In cities where demand exceeds the supply of customer-owned cars, Tesla will operate its own fleet, ensuring you can always hail a ride from us no matter where you are.
So, in short, Master Plan, Part Deux is:
Create stunning solar roofs with seamlessly integrated battery storage

Expand the electric vehicle product line to address all major segments

Develop a self-driving capability that is 10X safer than manual via massive fleet learning

Enable your car to make money for you when you aren't using it

Thursday, July 21, 2016

Online shopping: Tips to keep close to your wallet

Online shopping makes it easy and convenient to search for — and buy — the must-have items on your wish list. Before you buy, check out this video for tips on avoiding hassles, getting the right product at the right price, and protecting your financial information:
Here’s a quick cheat sheet:

  • Confirm that the seller is legit. Look for reviews about their reputation and customer service, and be sure you can contact the seller if you have a dispute.

  • Pay by credit card to ensure added protections, and never mail cash or wire money to online sellers.

  • Keep records of online transactions until you get the goods.

Want to know more? Check out more ways to ensure hassle-free online shopping.

Wednesday, July 20, 2016

This Guy Trains Computers to Find Future Criminals (BW)

Richard Berk says his algorithms take the bias out of criminal justice. But could they make it worse? 
by Joshua Brustein July 18, 2016

When historians look back at the turmoil over prejudice and policing in the U.S. over the past few years, they’re unlikely to dwell on the case of Eric Loomis. Police in La Crosse, Wis., arrested Loomis in February 2013 for driving a car that was used in a drive-by shooting. He had been arrested a dozen times before. Loomis took a plea, and was sentenced to six years in prison plus five years of probation.

The episode was unremarkable compared with the deaths of Philando Castile and Alton Sterling at the hands of police, which were captured on camera and distributed widely online. But Loomis’s story marks an important point in a quieter debate over the role of fairness and technology in policing. Before his sentence, the judge in the case received an automatically generated risk score that determined Loomis was likely to commit violent crimes in the future.

Risk scores, generated by algorithms, are an increasingly common factor in sentencing. Computers crunch data—arrests, type of crime committed, and demographic information—and a risk rating is generated. The idea is to create a guide that’s less likely to be subject to unconscious biases, the mood of a judge, or other human shortcomings. Similar tools are used to decide which blocks police officers should patrol, where to put inmates in prison, and who to let out on parole. Supporters of these tools claim they’ll help solve historical inequities, but their critics say they have the potential to aggravate them, by hiding old prejudices under the veneer of computerized precision. Some people see them as a sterilized version of what brought protesters into the streets at Black Lives Matter rallies.

Loomis is a surprising fulcrum in this controversy: He’s a white man. But when Loomis challenged the state’s use of a risk score in his sentence, he cited many of the fundamental criticisms of the tools: that they’re too mysterious to be used in court, that they punish people for the crimes of others, and that they hold your demographics against you. Last week the Wisconsin Supreme Court ruled against Loomis, but the decision validated some of his core claims. The case, say legal experts, could serve as a jumping-off point for legal challenges questioning the constitutionality of these kinds of techniques.

To understand the algorithms being used all over the country, it’s good to talk to Richard Berk. He’s been writing them for decades (though he didn’t write the tool that created Loomis’s risk score). Berk, a professor at the University of Pennsylvania, is a shortish, bald guy, whose solid stature and I-dare-you-to-disagree-with-me demeanor might lead people to mistake him for an ex-cop. In fact, he’s a career statistician.

His tools have been used by prisons to determine which inmates to place in restrictive settings; parole departments to choose how closely to supervise people being released from prison; and police officers to predict whether people arrested for domestic violence will re-offend. He once created an algorithm that would tell the Occupational Safety and Health Administration which workplaces were likely to commit safety violations, but says the agency never used it for anything. Starting this fall, the state of Pennsylvania plans to run a pilot program using Berk’s system in sentencing decisions.

As his work has been put into use across the country, Berk’s academic pursuits have become progressively fantastical. He’s currently working on an algorithm that he says will be able to predict at the time of someone’s birth how likely she is to commit a crime by the time she turns 18. The only limit to applications like this, in Berk’s mind, is the data he can find to feed into them.

“The policy position that is taken is that it’s much more dangerous to release Darth Vader than it is to incarcerate Luke Skywalker”

This kind of talk makes people uncomfortable, something Berk was clearly aware of on a sunny Thursday morning in May as he headed into a conference in the basement of a campus building at Penn to play the role of least popular man in the room. He was scheduled to participate in the first panel of the day, which was essentially a referendum on his work. Berk settled into his chair and prepared for a spirited debate about whether what he does all day is good for society.

The moderator, a researcher named Sandra Mayson, took the podium. “This panel is the Minority Report panel,” she said, referring to the Tom Cruise movie where the government employs a trio of psychic mutants to identify future murderers, then arrests these “pre-criminals” before their offenses occur. The comparison is so common it’s become a kind of joke. “I use it too, occasionally, because there’s no way to avoid it,” Berk said later.

For the next hour, the other members of the panel took turns questioning the scientific integrity, utility, and basic fairness of predictive techniques such as Berk’s. As it went on, he began to fidget in frustration. Berk leaned all the way back in his chair and crossed his hands over his stomach. He leaned all the way forward and flexed his fingers. He scribbled a few notes. He rested his chin in one hand like a bored teenager and stared off into space.

Eventually, the debate was too much for him: “Here’s what I, maybe hyperbolically, get out of this,” Berk said. “No data are any good, the criminal justice system sucks, and all the actors in the criminal justice system are biased by race and gender. If that’s the takeaway message, we might as well all go home. There’s nothing more to do.” The room tittered with awkward laughter.

Berk’s work on crime started in the late 1960s, when he was splitting his time between grad school and a social work job in Baltimore. The city exploded in violence following the assassination of Martin Luther King Jr. Berk’s graduate school thesis examined the looting patterns during the riots. “You couldn’t really be alive and sentient at that moment in time and not be concerned about what was going on in crime and justice,” he said. “Very much like today with the Ferguson stuff.”

In the mid-1990s, Berk began focusing on machine learning, where computers look for patterns in data sets too large for humans to sift through manually. To make a model, Berk inputs tens of thousands of profiles into a computer. Each one includes the data of someone who has been arrested, including how old they were when first arrested, what neighborhood they’re from, how long they’ve spent in jail, and so on. The data also contain information about who was re-arrested. The computer finds patterns, and those serve as the basis for predictions about which arrestees will re-offend.
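The pipeline described above, which feeds in labeled profiles, lets the machine find patterns, and scores new arrestees, can be sketched with a deliberately tiny, made-up example. The profiles and the single age split below are invented for illustration; real tools use far more flexible learners, such as random forests, trained on tens of thousands of records:

```python
from collections import defaultdict

# Hypothetical profiles: (age_at_first_arrest, months_in_jail, re_arrested)
profiles = [
    (17, 12, True), (16, 24, True), (30, 2, False),
    (19, 18, True), (35, 1, False), (28, 3, False),
]

# "Training": estimate the historical re-arrest rate within each bucket
# (here, a single crude split on age at first arrest).
buckets = defaultdict(list)
for age, months, re_arrested in profiles:
    buckets[age < 21].append(re_arrested)

rates = {is_young: sum(v) / len(v) for is_young, v in buckets.items()}

def predict_risk(age):
    """Score a new arrestee with the re-arrest rate of their bucket."""
    return rates[age < 21]

print(predict_risk(18))  # 1.0 in this toy data
print(predict_risk(40))  # 0.0 in this toy data
```

The point of the sketch is only the shape of the method: no theory of criminal behavior is supplied anywhere; the "risk score" is just a pattern recovered from past outcomes.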

To Berk, a big advantage of machine learning is that it eliminates the need to understand what causes someone to be violent. “For these problems, we don’t have good theory,” he said. Feed the computer enough data and it can figure it out on its own, without deciding on a philosophy of the origins of criminal proclivity. This is a seductive idea. But it’s also one that comes under criticism each time a supposedly neutral algorithm in any field produces worryingly non-neutral results. In one widely cited study, researchers showed that Google’s automated ad-serving software was more likely to show ads for high-paying jobs to men than to women. Another found that ads for arrest records show up more often when searching the web for distinctly black names than for white ones.

Computer scientists have a maxim, “Garbage in, garbage out.” In this case, the garbage would be decades of racial and socioeconomic disparities in the criminal justice system. Predictions about future crimes based on data about historical crime statistics have the potential to equate past patterns of policing with the predisposition of people in certain groups—mostly poor and nonwhite—to commit crimes.

Berk readily acknowledges this as a concern, then quickly dismisses it. Race isn’t an input in any of his systems, and he says his own research has shown his algorithms produce similar risk scores regardless of race. He also argues that the tools he creates aren’t used for punishment—more often they’re used, he said, to reverse long-running patterns of overly harsh sentencing, by identifying people whom judges and probation officers shouldn’t worry about.

Berk began working with Philadelphia’s Adult Probation and Parole Department in 2006. At the time, the city had a big murder problem and a small budget. There were a lot of people in the city’s probation and parole programs. City Hall wanted to know which people it truly needed to watch. Berk and a small team of researchers from the University of Pennsylvania wrote a model to identify which people were most likely to commit murder or attempted murder while on probation or parole. Berk generally works for free, and was never on Philadelphia’s payroll.

A common question, of course, is how accurate risk scores are. Berk says that in his own work, between 29 percent and 38 percent of predictions about whether someone is low-risk end up being wrong. But focusing on accuracy misses the point, he says. When it comes to crime, sometimes the best answers aren’t the most statistically precise ones. Just as weathermen err on the side of predicting rain because no one wants to get caught without an umbrella, court systems want technology that intentionally overpredicts how likely any individual is to commit a crime. The same person could end up being described as either high-risk or not depending on where the government decides to set that line. “The policy position that is taken is that it’s much more dangerous to release Darth Vader than it is to incarcerate Luke Skywalker,” Berk said.


Philadelphia’s plan was to offer cognitive behavioral therapy to the highest-risk people, and offset the costs by spending less money supervising everyone else. When Berk posed the Darth Vader question, the parole department initially determined it’d be 10 times worse, according to Geoffrey Barnes, who worked on the project. Berk figured that at that threshold the algorithm would name 8,000 to 9,000 people as potential pre-murderers. Officials realized they couldn’t afford to pay for that much therapy, and asked for a model that was less harsh. Berk’s team twisted the dials accordingly. “We’re intentionally making the model less accurate, but trying to make sure it produces the right kind of error when it does,” Barnes said.
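The Darth Vader/Luke Skywalker trade-off is a standard cost-sensitive decision rule: under basic decision theory, flagging someone as high-risk minimizes expected cost once their predicted probability exceeds cost_fp / (cost_fp + cost_fn). A minimal sketch of how a 10-to-1 cost ratio like Philadelphia’s initial choice would move that line (the function name and exact formulation here are illustrative, not the city’s actual model):

```python
def high_risk_threshold(cost_fn, cost_fp=1.0):
    """Probability above which flagging as high-risk minimizes expected cost.

    Flag when p * cost_fn > (1 - p) * cost_fp, i.e. when
    p > cost_fp / (cost_fp + cost_fn).
    cost_fn: cost of releasing someone who goes on to offend.
    cost_fp: cost of needlessly flagging someone harmless.
    """
    return cost_fp / (cost_fp + cost_fn)

# Treating a missed violent offender as 10x worse than an unneeded flag
# drags the threshold far below 50%, so far more people get labeled
# high-risk -- which is why the 10:1 ratio swept in thousands of names.
print(high_risk_threshold(cost_fn=10))  # 1/11, about 0.09
print(high_risk_threshold(cost_fn=1))   # 0.5 with symmetric costs
```

Softening the ratio, as Philadelphia asked Berk’s team to do, is exactly a matter of raising this threshold: the model’s probabilities don’t change, only where the high-risk line is drawn.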

The program later expanded to group everyone into high-, medium-, and low-risk populations, and the city significantly reduced how closely it watched parolees Berk’s system identified as low-risk. In a 2010 study, Berk and city officials reported that people who were given more lenient treatment were less likely to be arrested for violent crimes than people with similar risk scores who stayed with traditional parole or probation. People classified as high-risk were almost four times more likely to be charged with violent crimes.

Since then, Berk has created similar programs in Maryland’s and Pennsylvania’s statewide parole systems. In Pennsylvania, an internal analysis showed that between 2011 and 2014 about 15 percent of people who came up for parole received different decisions because of their risk scores. Those who were released during that period were significantly less likely to be re-arrested than those who had been released in years past. The conclusion: Berk’s software was helping the state make smarter decisions.

Laura Treaster, a spokeswoman for the state’s Board of Probation and Parole, says Pennsylvania isn’t sure how its risk scores are impacted by race. “This has not been analyzed yet,” she said. “However, it needs to be noted that parole is very different than sentencing. The board is not determining guilt or innocence. We are looking at risk.”

Sentencing, though, is the next frontier for Berk’s risk scores. And using algorithms to decide how long someone goes to jail is proving more controversial than using them to decide when to let people out early.

Wisconsin courts use Compas, a popular commercial tool made by a Michigan-based company called Northpointe. By the company’s account, the people it deems high-risk are re-arrested within two years in about 70 percent of cases. Part of Loomis’s challenge targeted Northpointe’s practice of declining to disclose how its tool generates scores, citing competitive reasons. Not allowing a defendant to assess the evidence against him violated due process, he argued. (Berk shares the code for his systems, and criticizes commercial products such as Northpointe’s for not doing the same.)

As the court was considering Loomis’s appeal, the journalism website ProPublica published an investigation looking at 7,000 Compas risk scores in a single county in Florida over the course of 2013 and 2014. It found that black people were almost twice as likely as white people to be labeled high-risk, then not commit a crime, while it was much more common for white people who were labeled low-risk to re-offend than black people who received a low-risk score. Northpointe challenged the findings, saying ProPublica had miscategorized many risk scores and ignored results that didn’t support its thesis. Its analysis of the same data found no racial disparities.

Even as it upheld Loomis’s sentence, the Wisconsin Supreme Court cited the research on race to raise concerns about the use of tools like Compas. Going forward, it requires risk scores to be accompanied by disclaimers about their nontransparent nature and various caveats about their conclusions. It also says they can’t be used as the determining factor in a sentencing decision. The decision was the first time that such a high court had signaled ambivalence about the use of risk scores in sentencing.

Sonja Starr, a professor at the University of Michigan’s law school and a prominent critic of risk assessment, thinks that Loomis’s case foreshadows stronger legal arguments to come. Loomis made a demographic argument, saying that Compas rated him as riskier because of his gender, reflecting the historical patterns of men being arrested at higher rates than women. But he didn’t frame it as an argument that Compas violated the Equal Protection Clause of the 14th Amendment, which allowed the court to sidestep the core issue.

Loomis also didn’t argue that the risk scores serve to discriminate against poor people. “That’s the part that seems to concern judges, that every mark of poverty serves as a risk factor,” Starr said. “We should very easily see more successful challenges in other cases.”

Officials in Pennsylvania, which has been slowly preparing to use risk assessment in sentencing for the past six years, are sensitive to these potential pitfalls. The state’s experience shows how tricky it is to create an algorithm through the public policy process. To come up with a politically palatable risk tool, Pennsylvania established a sentencing commission. It quickly rejected commercial products like Compas, saying they were too expensive and too mysterious, so the commission began creating its own system.


Race was discarded immediately as an input. But every other factor became a matter of debate. When the state initially wanted to include location, which it determined to be statistically useful in predicting who would re-offend, the Pennsylvania Association of Criminal Defense Lawyers argued that it was a proxy for race, given patterns of housing segregation. The commission eventually dropped the use of location. Also in question: the system’s use of arrests, instead of convictions, since it seems to punish people who live in communities that are policed more aggressively.

Berk argues that eliminating sensitive factors weakens the predictive power of the algorithms. “If you want me to do a totally race-neutral forecast, you’ve got to tell me what variables you’re going to allow me to use, and nobody can, because everything is confounded with race and gender,” he said.

Starr says this argument confuses the differing standards in academic research and the legal system. In social science, it can be useful to calculate the relative likelihood that members of certain groups will do certain things. But that doesn’t mean a specific person’s future should be calculated based on an analysis of populationwide crime stats, especially when the data set being used reflects decades of racial and socioeconomic disparities. It amounts to a computerized version of racial profiling, Starr argued. “If the variables aren’t appropriate, you shouldn’t be relying on them,” she said.

Late this spring, Berk traveled to Norway to meet with a group of researchers from the University of Oslo. The Norwegian government gathers an immense amount of information about the country’s citizens and connects each of them to a single identification file, presenting a tantalizing set of potential inputs.

Torbjørn Skardhamar, a professor at the university, was interested in exploring how he could use machine learning to make long-term predictions. He helped set up Berk’s visit. Norway has lagged behind the U.S. in using predictive analytics in criminal justice, and the men threw around a few ideas.

Berk wants to predict at the moment of birth whether people will commit a crime by their 18th birthday, based on factors such as environment and the history of a new child’s parents. This would be almost impossible in the U.S., given that much of a person’s biographical information is spread out across many agencies and subject to many restrictions. He’s not sure if it’s possible in Norway, either, and he acknowledges he also hasn’t completely thought through how best to use such information.

Caveats aside, this has the potential to be a capstone project of Berk’s career. It also takes all of the ethical and political questions and extends them to their logical conclusion. Even in the movie Minority Report, the government peered only hours into the future—not years. Skardhamar, who is new to these techniques, said he’s not afraid of making mistakes: They’re talking about them now, he said, so they can avoid future errors. “These are tricky questions,” he said, mulling all the ways the project could go wrong. “Making them explicit—that’s a good thing.”