How future policy and regulations will challenge AI

I recently wrote about how radical the incorporation of artificial intelligence (AI) into cybersecurity will be. Technological revolutions are, however, frequently not as rapid as we think. We tend to see specific moments, from Sputnik in 1957 to the iPhone in 2007, and call them “game changing”, without appreciating the intervening stages of innovation, implementation and regulation that ultimately result in that breakthrough moment. What, therefore, can we expect from this iterative and less eye-catching part of AI’s development, looking not just at the technological progress but at its interaction with national policy-making processes?

I can see two overlapping, but distinct, perspectives. The first relates to the reality that information and communication technology (ICT) and its applications develop faster than laws. In recent years, social media and ride-hailing apps have shown how this translates into the following regulatory experience:

  1. Innovation: R&D processes arrive at one or many practical options for a technology;
  2. Implementation: These options are applied in the real world, are refined through experience, and begin to spread through major global markets;
  3. Regulation: Governments intervene to defend the status quo or to respond to new categories of problem, e.g. cross-border data flows;
  4. Unanticipated consequences: Policy and technology’s interaction inadvertently harms one or both, e.g. the Wassenaar Arrangement’s impact on cybersecurity R&D.

AI could follow a similar path. However, unlike e-commerce or the sharing economy (but like nanotechnology or genetic engineering), AI actively scares people, so early regulatory interventions are likely. For example, a limited focus on using AI in certain sectors, e.g. defense or pharmaceuticals, might be positioned as more easily managed and controlled than AI’s general application. However, could such a limit really be imposed, particularly in light of the potential for transformative creative leaps that AI seems to promise? I suspect not, which would result in yet more controls. Leaving aside the fourth stage, the unknown unknowns of unanticipated consequences, the third phase, i.e. regulation, would almost inevitably run into trouble of its own by virtue of having to legally define something as unprecedented and mutable as AI. It seems to me, therefore, that even the basic phases of AI’s interaction with regulation could be fraught with problems for innovators, implementers and regulators.

The second, more AI-specific perspective is driven by the way its capabilities will emerge, which I feel will break down into three basic stages:

  1. Distinction: Creation of smarter sensors;
  2. Direction: Automation of human-initiated decision-making;
  3. Delegation: Enablement of entirely independent decision-making.

Smarter sensors will come in various forms, not least as part of the Internet of Things (IoT), and their aggregated data will have implications for privacy. Twentieth-century “dumb lenses” are already being connected to systems that can pick out number plates or human faces, but truly smart sensors could know almost anything about us, from what is in our fridge and on our grocery list to where we are going and whom we will meet. It is this aggregated, networked aspect of smarter sensors that will be at the core of the first AI challenge for policy-makers. As sensors become discriminating enough to anticipate what we might do next, e.g. in order to offer us useful information ahead of time, they create an inadvertent panopticon that the unscrupulous and actively criminal can exploit.

Moving past this challenge, AI will become able to support and enhance human decision-making. Human input will still be essential, but it might be as limited as a “go/no go” on an AI-generated proposal. From a legal perspective, mens rea or scope of liability might not be wholly thrown into confusion, as a human decision-maker remains. Narrow applications in certain highly technical areas, e.g. medicine or engineering, might be practical, but day-to-day users could be flummoxed if every choice had unreadable but legally essential Terms & Conditions. The policy-making response may be to use tort/liability law, obligatory insurance for AI providers/users, or new risk management systems to hedge the downside of AI-enhanced decision-making without losing the full utility of the technology.

Once decision-making is possible without human input, we begin to enter the realm of speculation. However, it is important to remember that there are already high-frequency trading (HFT) systems in financial markets that operate independently of direct human oversight, following algorithmic instructions. Nonetheless, the suggested linkages between “flash crash” events and HFT highlight the problems policy-makers and regulators will face. It may be hard to foresee what even a “limited” AI might do in certain circumstances, and the ex-ante legal liability controls mentioned above may seem insufficient to policy-makers should a system get out of control, either in the narrow sense of escaping the control of those people legally responsible for it, or in the general sense of being beyond anybody’s control.

These three stages suggest significant challenges for policy-makers, with existing legal processes losing their applicability as AI moves further away from direct human responsibility. The law is, however, adaptable, and solutions could emerge. In extremis, we might, for example, be willing to supplement the concept of “corporate persons” with a concept of “artificial persons”. Would any of us feel safer if we could assign legal liability to the AIs themselves and then sue them as we do corporations and businesses? Maybe.

In summary, then, the true challenges for AI’s development may not lie solely in the big-ticket moments of beating chess masters or passing Turing Tests. Instead, there will be any number of roadblocks caused by the needs of regulatory and policy systems still rooted in the 19th and 20th centuries. And, odd though this may sound from a technologist like me, that delay might be a good thing, given the potential transformative power of AI.

April 25, 2017 at 09:45PM

from Paul Nicholas

4 steps to managing shadow IT

Shadow IT is on the rise: more than 80 percent of employees report using apps that weren’t sanctioned by IT. Shadow IT includes any unapproved hardware or software, but SaaS is the primary driver of its rapid rise. Today, attempting to block it is an outdated, ineffective approach; employees simply find ways around IT controls.

How can you empower your employees and still maintain visibility and protection? Here are four steps to help you manage SaaS apps and shadow IT.

Step 1: Find out what people are actually using

The first step is to get a detailed picture of how employees use the cloud. Which applications are they using? What data is uploaded and downloaded? Who are the top users? Is a particular app too risky? These insights provide information that can help you develop a strategy for cloud app use in your organization, as well as indicate whether an account has been compromised or a worker is taking unauthorized actions.
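
To make this concrete, here is a minimal Python sketch of the discovery step. It assumes a hypothetical CSV export from a proxy or firewall with user, domain, bytes_up and bytes_down columns; a CASB automates this kind of analysis against real gateway logs.

```python
# Minimal sketch: inventory SaaS usage from a proxy/firewall log export.
# The CSV columns (user, domain, bytes_up, bytes_down) are assumptions --
# adapt the field names to whatever your gateway actually emits.
import csv
from collections import defaultdict

def summarize_cloud_usage(log_path):
    apps = defaultdict(lambda: {"users": set(), "bytes_up": 0, "bytes_down": 0})
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            app = apps[row["domain"]]
            app["users"].add(row["user"])
            app["bytes_up"] += int(row["bytes_up"])
            app["bytes_down"] += int(row["bytes_down"])
    return apps

# Rank discovered apps by upload volume: uploads are where your data leaves.
usage = summarize_cloud_usage("proxy_log.csv")
top = sorted(usage.items(), key=lambda kv: -kv[1]["bytes_up"])[:20]
for domain, stats in top:
    print(f"{domain}: {len(stats['users'])} users, "
          f"{stats['bytes_up']} B up / {stats['bytes_down']} B down")
```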

Step 2: Control data through granular policies

Once you have comprehensive visibility into, and understanding of, the apps your organization uses, you can begin to monitor users’ activities and implement custom policies tailored to your organization’s security needs: for example, restricting certain data types, or alerting on unexpectedly high rates of an activity. You can also take action when your policy is violated; for instance, you can make a public link private or quarantine a user.
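
As an illustration, the following Python sketch implements one such policy, flagging users whose hourly rate of an activity exceeds a threshold. The event format and the thresholds are illustrative assumptions, not any particular product’s schema.

```python
# Minimal sketch of a granular activity-rate policy. Each event is a
# (user, activity, hour_bucket) tuple; thresholds are per user per hour.
from collections import Counter

POLICIES = {
    "file_download": 100,   # assumed limit: downloads per user per hour
    "external_share": 10,   # assumed limit: external shares per user per hour
}

def find_violations(events):
    counts = Counter(events)  # how often each (user, activity, hour) occurred
    return [
        (user, activity, hour, n)
        for (user, activity, hour), n in counts.items()
        if activity in POLICIES and n > POLICIES[activity]
    ]

# A user downloading 150 files in one hour trips the file_download policy.
events = [("alice", "file_download", "2017-04-24T14")] * 150
for user, activity, hour, n in find_violations(events):
    print(f"ALERT: {user} performed {activity} {n} times in hour {hour}")
```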

Step 3: Protect your data at the file level

Protecting data at the file level is especially important when data is accessed via unknown applications. Data loss prevention (DLP) policies can help ensure that employees don’t accidentally send sensitive information, such as personally identifiable information (PII), credit card numbers, or financial results, outside of your corporate network. Today, there are solutions that make this even easier.
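
To show the idea behind a file-level detector, here is a minimal Python sketch that scans text for candidate credit card numbers and confirms them with the Luhn checksum to cut false positives. A real DLP engine bundles many more detectors (PII patterns, keyword dictionaries, document fingerprints).

```python
# Minimal DLP sketch: find credit-card-shaped digit runs, then keep only
# those that pass the Luhn checksum used by real card numbers.
import re

CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number):
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])                                   # odd positions
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])  # doubled evens
    return total % 10 == 0

def find_card_numbers(text):
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

# The Visa test number passes Luhn; short digit runs never match the pattern.
print(find_card_numbers("invoice ref 4111 1111 1111 1111, po 1234"))
# -> ['4111111111111111']
```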

Step 4: Use behavioral analytics to protect apps and data

Through machine learning and behavioral analytics, innovative threat-detection technologies analyze how each user interacts with SaaS applications and assess the risks through deep analysis. This helps you identify anomalies that may indicate a data breach, such as simultaneous logons from two countries, the sudden download of terabytes of data, or multiple failed logon attempts that may signify a brute-force attack.
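
As a simplified example of one such signal, the Python sketch below flags “impossible travel”: two logons to the same account from different countries closer together in time than any flight would allow. The event shape and the two-hour window are illustrative assumptions.

```python
# Minimal behavioral-analytics sketch: detect logons for one account from
# two countries within a window too short for physical travel.
from datetime import datetime, timedelta

TRAVEL_WINDOW = timedelta(hours=2)  # assumed minimum plausible travel time

def impossible_travel(logons):
    """logons: list of (user, country, timestamp), sorted by timestamp."""
    last_seen = {}
    alerts = []
    for user, country, ts in logons:
        prev = last_seen.get(user)
        if prev and prev[0] != country and ts - prev[1] < TRAVEL_WINDOW:
            alerts.append((user, prev[0], country, ts))
        last_seen[user] = (country, ts)
    return alerts

logons = [
    ("bob", "US", datetime(2017, 4, 24, 9, 0)),
    ("bob", "RU", datetime(2017, 4, 24, 9, 20)),  # 20 minutes later
]
for user, c1, c2, ts in impossible_travel(logons):
    print(f"ALERT: {user} logged on from {c1} then {c2} at {ts}")
```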

Where can you start?

Consider a Cloud Access Security Broker (CASB). These solutions are designed to help you achieve each of these steps in a simple, manageable way. They provide deeper visibility, comprehensive controls, and improved protection for the cloud applications your employees use—sanctioned or unsanctioned.

To learn why CASBs are becoming a necessity, read our new e-book. It outlines the common issues surrounding shadow IT and how a CASB can be a helpful tool in your enterprise security strategy.

Read Bring Shadow IT into the Light.

April 24, 2017 at 09:43PM

from Microsoft Secure Blog Staff

Russian Hacker Sentenced to 27 Years in Credit Card Case

The schemes of Roman Seleznev led to the theft and resale of more than two million credit card numbers, resulting in losses of at least $170 million.

April 22, 2017 at 12:02AM

from Nicole Perlroth

Navigating cybersecurity in the New Age

In today’s rapidly evolving tech landscape, tools, gadgets, and platforms aren’t the only things advancing. Cyberattacks are becoming more powerful, wide-ranging, and harmful to organizations around the globe.

For any enterprise, cybersecurity is one of the most essential factors in business success. With new and emerging technology, leaders have to address modern security needs with stronger, more intelligent solutions. Today’s security officers must:

  • Recognize the intricacies of cyberspace and the cyberattacks that threaten it
  • Take advantage of machine learning and cloud platforms that enhance security
  • Gain insights into top trends and the future of the cybersecurity industry

Navigating today’s advanced cyber threats is a team effort. Organizations must learn new skills to protect themselves from cyber criminals and ensure infrastructure security. It takes a team of security experts, analysts, IT specialists, and risk assessors to restructure and refine cybersecurity.

On May 10th, Microsoft will live stream from the Security Summit, an invitation-only event for Chief Information Security Officers. Attend the live Virtual Security Summit to hear from leading security experts about best practices and solutions to keep your organization safe.

Don’t miss this opportunity to gain insights and learn how to protect your organization and how to detect and respond to evolving cyberattacks.

April 20, 2017 at 09:19PM

from Microsoft Secure Blog Staff