Thursday, February 28, 2019

Frontier: Losing Customers While Raising Prices; Company Loses $643 Million in 2018

In the last three months of 2018, Frontier Communications reported it said goodbye to 67,000 broadband customers and wrote down the value of its assets and business by $241 million, capping a year in which the company lost $643 million as it struggles with a deteriorating copper wire network in many of the states where it operates.

But Wall Street was pleased the company’s latest quarterly results were not worse, and the relief helped lift Frontier’s stock from $2.42 to $2.96 this afternoon, still down considerably from the $125-a-share price the company commanded just four years ago.

Frontier’s fourth quarter 2018 financial results arrived the same week Windstream, another independent telephone company, declared Chapter 11 bankruptcy reorganization. Life is rough for the nation’s legacy telephone companies, especially those that have continued to depend on copper wire infrastructure that, in some cases, was attached to poles during the Johnson or Nixon Administrations.

Frontier Communications CEO Dan McCarthy is the telephone company’s version of Sears’ former CEO Edward Lampert. Perpetually optimistic, McCarthy has embarked on a long-term ‘transformation’ strategy at Frontier to wring additional profit out of a business that provides service to customers in 29 states. Much of that effort has focused on cost-cutting measures, including layoffs of 1,560 workers last year, a sale of wireless towers, and various plans to make business operations more efficient, all of which have delivered mixed results.

Frontier’s efforts to improve customer service have been hampered by the quality and pricing of its services, which draw complaints from customers, many of whom eventually depart. Frontier’s overall health continues to decline, with financial gains coming mostly from rate increases and new hidden fees and surcharges. In fact, Frontier’s latest revenue improvements come almost entirely from charging customers more for the same service.

McCarthy calls it ‘cost recovery’ and ‘steady-state pricing.’

“One of the things that we’ve been focused on really for the better part of two years is …. taking advantage of pricing opportunities [and] recovering content costs — really dealing with customers moving from promotional pricing to steady-state pricing, and then offering different opportunities for customers both from a speed and package perspective,” McCarthy said Tuesday. “The quarter really was about us targeting customers very selectively and really trying to improve customer lifetime value.”

By “selectively,” McCarthy means being willing to let promotion-seeking customers go and being less amenable to customers trying to negotiate for a lower bill. The result, so far, is 103,000 service disconnects over the past three months and 379,000 fewer customers over the past year. A good number of those customers were subscribed to Frontier FiOS fiber to the home service, but still left for a cable company or competing fiber provider, often because Frontier kept raising their bill.


Thanks to Phillip Dampier (see source)

Cisco warns a critical patch is needed for a remote access firewall, VPN and router

Cisco is warning organizations with remote users that have deployed any of three particular Cisco wireless VPN firewalls and routers to patch a critical vulnerability in each that could let attackers break into the network.

The vulnerability, which has an impact rating of 9.8 out of 10 on the Common Vulnerability Scoring System, lets a potential attacker send malicious HTTP requests to a targeted device. A successful exploit could let the attacker execute arbitrary code on the underlying operating system of the affected device as a high-privilege user, Cisco stated.

The vulnerability is in the web-based management interface of three products: Cisco’s RV110W Wireless-N VPN Firewall, RV130W Wireless-N Multifunction VPN Router and RV215W Wireless-N VPN Router. All three products are positioned as remote-access communications and security devices.

The web-based management interface of these devices is available through a local LAN connection or through the remote-management feature; by default, the remote-management feature is disabled on these devices, Cisco said in its Security Advisory.

It said administrators can determine whether the remote-management feature is enabled for a device by opening the web-based management interface and choosing “Basic Settings > Remote Management.” If the “Enable” box is checked, remote management is enabled for the device.

The vulnerability is due to improper validation of user-supplied data in the web-based management interface, Cisco said.

Cisco has released software updates that address this vulnerability and customers should check their software license agreement for more details. 

Cisco warned of other developing security problems this week.

Elasticsearch

Cisco’s Talos security researchers warned that users need to keep a close eye on unsecured Elasticsearch clusters. Elasticsearch is an open-source distributed search and analytics engine built on Apache Lucene. 

“We have recently observed a spike in attacks from multiple threat actors targeting these clusters,” Talos stated.  In a post, Talos wrote that attackers are targeting clusters using versions 1.4.2 and lower, and are leveraging old vulnerabilities to pass scripts to search queries and drop the attacker’s payloads. These scripts are being leveraged to drop both malware and cryptocurrency-miners on victim machines.
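The version triage Talos describes can be sketched in a few lines of Python. This is a rough illustration, not Talos tooling: the `GET /` root endpoint and its `version.number` field are standard Elasticsearch API behavior, but the URL handling and suffix stripping here are simplifying assumptions.

```python
# Rough sketch: flag Elasticsearch nodes running versions in the targeted
# range (1.4.2 and lower). Illustrative only; the node URL is a placeholder.
import json
from urllib.request import urlopen

VULN_MAX = (1, 4, 2)  # the attacks described above target 1.4.2 and lower

def parse_version(number: str) -> tuple:
    # "1.4.2" -> (1, 4, 2); drop any "-SNAPSHOT"-style suffix first
    return tuple(int(p) for p in number.split("-")[0].split("."))

def is_vulnerable(number: str) -> bool:
    return parse_version(number) <= VULN_MAX

def check_node(base_url: str) -> bool:
    # GET / on an Elasticsearch node returns its version metadata
    with urlopen(base_url, timeout=5) as resp:
        info = json.load(resp)
    return is_vulnerable(info["version"]["number"])
```

Anything this flags should be upgraded or isolated, since the attacks rely on long-patched scripting vulnerabilities.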

Talos also wrote that it has identified social-media accounts associated with one of these threat actors. “Because Elasticsearch is typically used to manage very large datasets, the repercussions of a successful attack on a cluster could be devastating due to the amount of data present. This post details the attack methods used by each threat actor, as well as the associated payloads,” Cisco wrote.

Docker and Kubernetes

Cisco continues to watch a run-time security issue with Docker and Kubernetes containers. “The vulnerability exists because the affected software improperly handles file descriptors related to /proc/self/exe. An attacker could exploit the vulnerability either by persuading a user to create a new container using an attacker-controlled image or by using the docker exec command to attach into an existing container that the attacker already has write access to,” Cisco wrote.

“A successful exploit could allow the attacker to overwrite the host's runc binary file with a malicious file, escape the container, and execute arbitrary commands with root privileges on the host system,” Cisco stated.  So far Cisco has identified only three of its products as susceptible to the vulnerability: Cisco Container Platform, Cloudlock and Defense Orchestrator.  It is evaluating other products, such as the widely used IOS XE Software package.
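A sketch of the version triage involved, assuming Docker 18.09.2 as the first release shipping the fixed runc (the commonly cited fix version for this issue; distribution backports may differ, so treat the threshold as an assumption to verify against your vendor's advisory):

```python
# Sketch: decide whether an installed Docker engine predates the runc
# /proc/self/exe fix. The 18.09.2 threshold is an assumption based on the
# commonly cited first fixed release; check your distribution's backports.
def parse_docker_version(v: str) -> tuple:
    # "18.09.1" or "17.12.0-ce" -> comparable integer tuple
    return tuple(int(p) for p in v.split("-")[0].split("."))

def needs_runc_patch(v: str) -> bool:
    return parse_docker_version(v) < (18, 9, 2)
```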

Webex

Cisco issued a third patch-of-a-patch for its Webex system. Specifically Cisco said in an advisory that a vulnerability in the update service of Cisco Webex Meetings Desktop App and Cisco Webex Productivity Tools for Windows could allow an authenticated, local attacker to execute arbitrary commands as a privileged user. The company issued patches to address the problem in October and November, but the issue persisted.

“The vulnerability is due to insufficient validation of user-supplied parameters. An attacker could exploit this vulnerability by invoking the update service command with a crafted argument. An exploit could allow the attacker to run arbitrary commands with SYSTEM user privileges,” Cisco stated. 

The vulnerability affects all Cisco Webex Meetings Desktop App releases prior to 33.6.6, and Cisco Webex Productivity Tools Releases 32.6.0 and later prior to 33.0.7, when running on a Microsoft Windows end-user system.
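Those two affected ranges are easy to get wrong when auditing installed versions; here is a minimal Python sketch that directly encodes the ranges stated above (the function names are illustrative, not Cisco tooling):

```python
# Sketch of the affected-version checks from Cisco's advisory, as quoted above.
def parse(v: str) -> tuple:
    return tuple(int(p) for p in v.split("."))

def desktop_app_affected(v: str) -> bool:
    # All Webex Meetings Desktop App releases prior to 33.6.6
    return parse(v) < (33, 6, 6)

def productivity_tools_affected(v: str) -> bool:
    # Webex Productivity Tools 32.6.0 and later, prior to 33.0.7
    return (32, 6, 0) <= parse(v) < (33, 0, 7)
```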

Details on how to obtain and apply the patch are in Cisco’s advisory.

Join the Network World communities on Facebook and LinkedIn to comment on topics that are top of mind.


Thanks to Michael Cooney (see source)

New chemistry-based data storage would blow Moore’s Law out of the water

Molecular electronics, where charges move through single, tiny molecules, could be the future of computing and, in particular, storage, some scientists say.

Researchers at Arizona State University (ASU) point out that a molecule-level computing technique, if its development succeeds, would shatter Gordon Moore’s 1965 prophecy — Moore's Law — that the number of transistors on a chip will double every year, allowing electronics to get proportionally smaller. In this case, hardware, including transistors, could conceivably fit on individual molecules, reducing chip sizes much more significantly than Moore ever envisaged.
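For a sense of scale, an annual doubling compounds exponentially; a one-line sketch of the arithmetic (the starting transistor count is purely illustrative):

```python
# Moore's 1965 projection: transistor counts double every year.
def moores_law(count0: int, years: int, doubling_period: float = 1.0) -> float:
    return count0 * 2 ** (years / doubling_period)

# e.g., an illustrative 64-transistor chip, ten annual doublings later:
print(moores_law(64, 10))  # 65536.0
```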

“The intersection of physical and chemical properties occurring at the molecular scale” is now being explored, and shows promise, an ASU article says. The researchers think Moore’s miniaturization projections will be blown out of the water.

Ultra-miniaturization, using chemistry and its molecules and atoms, has been on the scientific community’s radar for a while. However, progress has been rocky; temperature has been a problem, among other things.

One big issue, which may be about to be solved, relates to controlling flowing electrons. The flowing current acts like a wave and gets interfered with, a bit like a water wave. The phenomenon is called quantum interference, and it is an area in which the researchers claim to be making progress.

Researchers want to get a handle on “not only measuring quantum phenomena in single molecules, but also controlling them,” says Nongjian "NJ" Tao, director of the ASU's Biodesign Center for Bioelectronics and Biosensors, in the article.

He says that by figuring the charge-transport properties better, they’ll be able to develop the new, ultra-tiny electronics devices. If successful, data storage equipment and the general processing of information could end up operating through high-speed, high-power molecular switches. Transistors and rectifiers could also become molecular scale. Miniaturization-limiting silicon could be replaced.

“A single organic molecule suspended between a pair of electrodes as a current is passed through the tiny structure” is the foundation for the experiments, the school explains. A technique called electrochemical gating, in which conductance is controlled, is then used. It manages the interference and is related to how “waves in water can combine to form a larger wave or cancel one another out, depending on their phase.” Through this technique, the researchers say they have been able, for the first time ever, to fine-tune conductance in a molecule. That's a big step.

Other chemistry-related data storage research

I’ve written before about chemistry “superseding traditional engineering” in shrinking data storage. Last year, unrelated to this ASU and others’ quantum interference project, Brown University said it was working on ways to store terabytes of data chemically in a flask of liquid.

“Synthetic molecule storage in liquids could one day replace hard drives,” I wrote. In a proof of concept, the Brown team loaded an 81-pixel image onto 25 separate molecules using a chemical reaction. It works similarly to how pharmaceuticals get components onto one molecule.

Researchers at the University of Basel in Switzerland are also attempting to reduce data-storage size through chemistry. That team explained in a media release late last year that it plans to use a technique similar to the one used to record to a CD, where metal is melted within plastic and then allowed to re-form, thus encoding data, but it wants to attempt this at an ultra-miniature atomic or molecular level. The team has just succeeded in controlling molecules in a self-organizing network.

All of the research is impressive, and like the ASU article says, “It’s unlikely Moore could have foreseen the extent of the electronics revolution currently underway."


Thanks to Patrick Nelson (see source)

BrandPost: Why Data Center Management Responsibilities Must Include Edge Data Centers

Now that edge computing has emerged as a major trend, the question for enterprises becomes how to migrate the data center management expertise acquired over many years to these new, remote environments.

Enterprise data centers have long provided a strong foundation for growth.  They enable businesses to respond more quickly to market demands. However, this agility is heavily dependent on the reliability and manageability of the data center.  As data center operational complexity increases, maintaining uptime while minimizing costs becomes a bigger challenge.

In order to maintain a high level of resiliency, existing data center best practices must now be exported to the emerging edge computing environments. In edge settings, reliability and manageability are by no means assured. The majority of workers located in edge environments (think retail store clerks, for example) lack data center or IT experience.  Yet edge IT environments have a direct impact on corporate profitability (think of a retail outlet whose cash registers and promo displays go down in the middle of the holiday shopping rush). A new way of thinking is necessary to ensure edge sites are properly managed and business agility is maintained.

The Administrative Challenge of Edge Computing

As compute power and storage are now found near a hospital bed, an off-shore oil rig, or on a factory floor, real-time decisions need to be made within a secure environment where latency is not tolerated.

Within just one global enterprise, potentially thousands of edge sites will require solutions that can help maintain application uptime and data integrity. Unlike the more centralized data center business models, on-site administrators are often not available to support edge environments. The key to addressing this challenge is to deploy tools capable of performing remote management and predictive maintenance.

New Technology Innovations for Edge Data Center Management

Fortunately, new technology innovations are now making it possible to capture the expertise needed to support edge environments. One example of this technology is the micro data center. These prepackaged blocks of processing, storage, power, and cooling are often shipped to end users fully integrated, pre-configured, pre-assembled, and pre-tested. They can begin working as soon as they are delivered and plugged in. New tools are also now available to gain the necessary management control over these distributed data centers. For example, Schneider Electric’s cloud-based EcoStruxure IT infrastructure management software enables remote administrators to monitor critical micro data center performance details like temperature, humidity, and available backup battery runtime. These new hardware and software solutions help ensure the availability of data center systems, regardless of how remote the location.

Such tools also enable predictive maintenance (knowing in advance that a particular component is likely to fail). This new level of system and component monitoring allows large retail outlets, for example, to avoid unplanned downtime. Parts can be replaced during off-hours before any failure or missed sales occur.

To learn more about how micro data centers and remote support software can help support new edge computing environments, download the Schneider Electric white paper “Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge” or view this short video on data center resiliency. 


Thanks to Brand Post (see source)

Why the industrial IoT is more important than consumer devices — and 7 more surprising IoT trends

Given the Internet of Things’ (IoT) perch atop the hype cycle, IoT trend-spotting has become a full-time business, not just an end-of-the-year pastime. It seems every major — and minor — IoT player is busy laying out its vision of where the technology is going. Most of them harp on the same themes, of course, from massive growth to security vulnerabilities to skills shortages.

Those are all real concerns, but Chris Nelson, vice president of engineering at operational intelligence vendor OSIsoft, shared some more distinctive viewpoints via email. In addition to his contention that the IoT will blur the lines between IT, which runs the customers’ systems and email, and operational technology (OT), which runs the technology behind the production systems, he talked about what will drive the IoT in the next year.

8 trends driving the IoT in 2019

Let’s take a closer look at the eight trends Nelson is thinking about:

1. Industrial and commercial applications will drive the industry, not consumers

According to Nelson, that’s because businesses can monetize IoT’s benefits better. He cites energy consumption as a key example, noting that “industry consumes 54 percent of delivered energy worldwide, according to the Energy Information Administration, or more than consumers and transportation combined.” Reducing energy use at an aluminum or paper plant by one or two percentage points, he says, can mean millions of dollars in savings. A consumer cutting power consumption by 1 percent would save only a few dollars a month.

My take: True enough, but there are a lot more consumers than there are businesses, and overall consumer spending drives the U.S. economy. Still, Nelson has a point that relatively few big enterprise implementations could help jumpstart IoT usage, while energizing the mass consumer market can take years of expensive marketing to clarify sometimes complex and esoteric benefits. In addition, we can hope that industrial and enterprise IoT will be better equipped to deal with security concerns.

2. The edge will be far more important than people realize

“The edge is basically any place — a wind farm, a factory — where data is generated, analyzed, and largely stored locally,” Nelson said. “Wait? Isn’t that just a data center? Sort of. The difference is the Internet of Things.” His point is that most of the vast amount of machine-generated data doesn’t need to travel very far. “The people who want it and use it are generally in the same building,” he noted, quoting Gartner’s prediction that more than 50 percent of data will be generated and processed outside traditional data centers — on the edge — although “snapshots and summaries might go to the cloud for deep analytics.”

But Nelson wasn’t sure about what kind of edge architectures would prevail. The edge might function like an interim way station for the cloud, he noted, or we could see the emergence of “Zone” networking — edges within edges — that can conduct their own analytics and perform other tasks on a smaller, more efficient scale.

My take: It’s hard to disagree with the synergy between the IoT and “the edge,” but as cloud architectures continue to dominate computing, it seems like the architectural distinctions among the edge, the data center, and the cloud may start to fade.

3. Synthetic data will become a more urgent concern

Nelson defines synthetic data as “misleading information that makes good people do bad things.” He means things like hackers sending “synthetic” notifications to a control room to get operators to open gates on a reservoir, flooding a neighborhood. He calls it “Stuxnet goes mainstream.” He noted work by Lawrence Berkeley Lab and Aperio, among others, to spot fake data.

My take: Given my work in the application monitoring space, "synthetic data" means something a bit different to me. But Nelson is right that hacked or spoofed IoT data is a currently under-appreciated risk.

4. Real-time data will grow in importance

Nelson cited IDC data predicting that real-time data will grow from 15 percent of all digital data in 2017 to 30 percent in 2030, alongside a 7x to 10x jump in total data volume. He predicts more innovation and investment in this area, “particularly in software that will let people understand what machines are saying.”
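Combining those two IDC figures implies the absolute volume of real-time data would grow roughly 14x to 20x; a quick sketch of the arithmetic:

```python
# Worked arithmetic on the IDC figures above: if total data volume grows
# 7x-10x while the real-time share doubles from 15% to 30%, the absolute
# volume of real-time data grows roughly 14x-20x.
def realtime_growth(total_growth: float, share_then: float, share_now: float) -> float:
    return (share_now / share_then) * total_growth

low = realtime_growth(7.0, 0.15, 0.30)    # 14.0
high = realtime_growth(10.0, 0.15, 0.30)  # 20.0
```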

My take: There’s no question that using real-time data to drive real-time decisions will become increasingly important. Given the huge amounts of data involved, though, I would look for artificial intelligence (AI) and machine learning solutions to take the forefront in turning this data into action.

5. Smart equipment will begin to get momentum

Nelson predicted that “manufacturers will increasingly integrate real-time monitoring and diagnostics into equipment,” citing Caterpillar’s CAT Connect engine-monitoring services and Flowserve’s work building intelligence into its industrial pumps.

“Over the past five years,” he said, “we’ve seen the technology stack come together and several end-users conduct trials. Over the next five, we will see commercial adoption.”

My take: Again, I have no argument with this point, but building smarts into expensive, long-lasting industrial equipment carries its own risks in the fast-evolving world of IoT. Because IoT changes much faster than the useful life of the equipment, there’s a real risk of obsolescence unless vendors can create modular, upgradable, solutions.

6. Rules and business practices for data sharing will start to gel

Nelson posed an interesting question: “Let’s say an equipment provider provides ongoing monitoring on devices it sold or leased to an end user. Who owns that data? Most would say the end users, but what if the equipment provider conducted analytics on the raw data, thereby creating a second set of information that’s more valuable than the first? Can data from one facility be anonymized and used to optimize benchmarks for another owned by a competitor? These are big questions, and no one has figured them out yet.”

My take: Yes, yes, yes. But data ownership is a huge and thorny issue, and I am less confident that we’ll make real progress on solving it in 2019 or any time soon. I look for this to be an ongoing area of concern for years.

7. Traditional businesses will develop new business models out of IoT

Nelson cited small, rural utilities that have begun to sell broadband services by leveraging their smart-meter investments in a new way, as well as large utilities and manufacturers studying plans to commercialize their in-house IoT applications for predictive maintenance.

My take: That’s only the tip of the iceberg for what many see as the holy grail of IoT. Sure, saving money is great, but the real opportunity is using IoT to create wholly new businesses. I think it’s still too early to know what those new ideas will be and which ones will take off.

8. IoT projects will have to hit their numbers

“Companies won’t fund open-ended projects,” Nelson said, and “they will want to see payoff in two years or less.”

My take: Not sure we’re there yet. Many enterprise and industrial IoT projects are still in the pilot stage, trying to figure out what their “numbers” should be. Until that gets settled, it seems a little premature to talk about “making” those numbers.


Thanks to Fredric Paul (see source)

Wednesday, February 27, 2019

Sample Solution: Automating L3VPN Deployments

A long while ago I published my solution for automated L3VPN provisioning… and I’m really glad I can point you to a much better one ;)

Håkon Rørvik Aune decided to tackle the same challenge as his hands-on assignment in the Building Network Automation Solutions course and created a nicely-structured and well-documented solution (after creating a playbook that creates network diagrams from OSPF neighbor information).

Want to be able to do something similar? You missed the Spring 2019 online course, but you can get the mentored self-paced version with Expert Subscription.


Thanks to Ivan Pepelnjak (see source)

Struggling Dish’s Sling TV Cuts Prices 40% for First 90 Days

Sling TV, one of the first online streaming alternatives to cable television, is slashing prices by 40% for the first three months to attract more subscribers.

Sling’s basic plans are now priced at $15 a month, with more deluxe tiers available for $25 a month for new customers.

As competitors pick up new customers, a significant number are coming from Sling TV, which is known for having one of the smallest channel lineups in the streaming industry, and from DirecTV Now, which has been raising prices. To protect its flank, Sling TV is cutting prices to win back old customers and attract new ones.

Sling still has the biggest customer base among streamers with an estimated 2.42 million customers at the end of 2018. But other providers are catching up:

  • Sling TV: Has 2.42 million customers, but added fewer than 50,000 new customers in the last quarter of 2018.
  • YouTube TV: Estimated at 1 million subscribers, picking up 400,000 new customers in the fourth quarter of 2018.
  • Hulu TV: Now up to 1 million customers, Hulu added 500,000 new customers in the last three months of 2018.
  • DirecTV Now: Lost 267,000 subscribers in the fourth quarter, ending 2018 with 1.6 million subscribers, down from 1.86 million as of Sept. 30.


Thanks to Phillip Dampier (see source)

Protecting the IoT: 3 things you must include in an IoT security plan

With many IT projects, security is often an afterthought, but that approach puts the business at significant risk. The rise of IoT adds orders of magnitude more devices to a network, which creates many more entry points for threat actors to breach. A bigger problem is that many IoT devices are easier to hack than traditional IT devices, making them the endpoint of choice for the bad guys.

IoT is widely deployed in a few industries, but for most businesses it is still in the early innings. For those just starting out, IT and security leaders should be laying out security plans for their implementations now. However, the security landscape is wide and confusing, so how to secure an IoT deployment may not be obvious. Below are three things you must consider when creating an IoT security plan.

What to include in an IoT security plan

Visibility is the foundation of IoT security

I’ve said this before, but it’s worth repeating. You can’t secure what you can’t see, so the very first step in securing IoT is knowing what’s connected. The problem is that most companies have no clue. Earlier this year, I ran a survey and asked how confident respondents were that they knew what devices were connected to the network. A whopping 61 percent said low or no confidence. What’s worse is that this is up sharply from three years ago when the number was 51 percent, showing that network and security teams are falling behind.

Visibility is the starting point, but there are several steps in getting to full visibility. These include:

  • Device identification and discovery. It’s important to have a tool that automatically detects, profiles, and classifies what’s on the network and develops a complete inventory of devices. Once profiled, security professionals can answer key questions, such as, “What OS is on the device?” “How is it configured?” and “Is it trusted or rogue?” It’s important that the tool continuously monitors the network so a device can be discovered and profiled as soon as it is connected.
  • Predictive analysis. After discovery, the behavior of the devices should be learned and baselined so systems can react to an attack before it does any harm. Once the “norm” is established, the environment can be monitored for anomalies and then action taken. This is particularly useful for advanced persistent threats (APTs) that are “low and slow” where they remain dormant and quietly map out the environment. Any change in behavior, no matter how small, will trigger an alert.
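The baseline-then-alert idea in the bullets above can be sketched in a few lines. The 3-sigma threshold and the per-device traffic samples are illustrative assumptions, not how any particular product computes its “norm”:

```python
# Toy sketch of behavioral baselining: learn a per-device norm from observed
# traffic volumes, then flag readings that deviate by more than n_sigma
# standard deviations. Threshold and samples are illustrative assumptions.
from statistics import mean, stdev

def build_baseline(samples):
    # Returns (mean, standard deviation) of the observed readings
    return mean(samples), stdev(samples)

def is_anomalous(reading, baseline, n_sigma=3.0):
    mu, sigma = baseline
    return abs(reading - mu) > n_sigma * sigma
```

Even a crude baseline like this shows why “low and slow” APTs are hard: an attacker who stays within a few sigma of the learned norm never trips the alert.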

Segmentation increases security agility, stops threats from moving laterally

This is the biggest no brainer in security today. Fortinet’s John Maddison recently talked with me about how segmentation adds flexibility and agility to the network and can protect against insider threats and spillover from malware that has infected other parts of the network. He was talking about it in the context of SD-WAN, but it’s the same problem, only magnified with IoT.

Segmentation works by assigning policies, separating assets, and managing risk. When a device is breached, segmentation stops the threat from moving laterally, as assets are classified and grouped together. For example, a policy can be established in a hospital to put all heart pumps in a secure segment. If one is breached, there is no access to medical records.

When putting together a segmentation plan, there are three key things to consider:

  • Risk identification. The first step is to classify devices by whatever criteria the company deems important. This can be users, data, devices, locations, or almost anything else. Risk should then be assigned to groups with similar risk profiles. For example, in a hospital, all MRI-related endpoints can be isolated into their own segment. If one is breached, there’s no access to medical records or other patient information.
  • Policy management. As the environment expands, new devices need to be discovered and have a policy applied to them. If a device moves, the policy needs to move with it. It’s important that policy management be fully automated because people can’t make changes fast enough to keep up with dynamic organizations. Policies are the mechanism to manage risk across the entire company.
  • Control. Once a threat actor gains access, an attacker can roam the network for weeks before acting. Isolating IoT endpoints and the other devices, servers, and ports they communicate with allows the company to separate resources on a risk basis. Choosing to treat parts of the network that interact with IoT devices differently from a policy standpoint allows the organization to control risk.
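The risk-based grouping described in the steps above can be sketched as a simple policy table. The device classes and segment names are hypothetical, and real segmentation is enforced in network gear rather than application code; this only illustrates the default-deny-across-segments logic:

```python
# Toy sketch of risk-based segmentation: classify devices into segments,
# then deny any flow that crosses a segment boundary by default.
# Device classes and segment names are illustrative assumptions.
SEGMENT_OF = {
    "heart_pump": "clinical-iot",
    "mri_scanner": "imaging-iot",
    "records_db": "medical-records",
    "nurse_laptop": "staff-lan",
}

def allowed(src_device: str, dst_device: str) -> bool:
    # Default-deny across segments: traffic stays inside its own segment
    return SEGMENT_OF[src_device] == SEGMENT_OF[dst_device]
```

Under this policy, a breached heart pump cannot reach the records database, which is exactly the lateral-movement containment the hospital example describes.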

Device protection is the final step in IoT security

The priority for IoT security is to protect the device first and then the network. Once an IoT device is secured and joins the network, it must be secured in a coordinated manner with other network elements. Protecting IoT endpoints is a matter of enforcing policies correctly. This is done through the following mechanisms:

  • Policy flexibility and enforcement. The solution needs to be flexible and have the ability to define and enforce policies at the device and access level. To meet the demands of IoT, rules need to be enforced that govern device behavior, traffic types, and where traffic can reside on the network. IoT endpoints, consumer devices, and cloud apps are examples where different policies must be established and enforced.
  • Threat intelligence. Once controls are established, it’s important to consistently enforce policies and translate compliance requirements across the network. This creates an intelligent fabric of sorts that’s self-learning and can respond to threats immediately. When intelligence is distributed across the network, action can be taken at the point of attack instead of waiting for the threat to come to a central point. The threat intelligence should be a combination of local information and global information to identify threats before they happen.

Unfortunately for network and security professionals, there is no “easy button” when it comes to securing IoT devices. However, with the right planning and preparation even the largest IoT environments can be secured so businesses can move forward with their implementations without putting the company at risk.

Join the Network World communities on Facebook and LinkedIn to comment on topics that are top of mind.

Let's block ads! (Why?)


Thanks to Zeus Kerravala (see source)

Protecting the IoT: 3 things you must include in an IoT security plan

With many IT projects, security is often an afterthought, but that approach puts the business at significant risk. The rise of IoT adds orders of magnitude more devices to a network, creating many more entry points for threat actors to exploit. A bigger problem is that many IoT devices are easier to hack than traditional IT devices, making them the endpoint of choice for the bad guys.

IoT is widely deployed in a few industries, but it is still in the early innings for most businesses. For those just starting out, IT and security leaders should be laying out security plans for their implementations now. However, the security landscape is wide and confusing, so how to secure an IoT deployment may not be obvious. Below are three things you must consider when creating an IoT security plan.

What to include in an IoT security plan

Visibility is the foundation of IoT security

I’ve said this before, but it’s worth repeating: you can’t secure what you can’t see, so the very first step in securing IoT is knowing what’s connected. The problem is that most companies have no clue. Earlier this year, I ran a survey and asked how confident respondents were that they knew what devices were connected to the network. A whopping 61 percent said they had low or no confidence, up sharply from 51 percent three years ago, showing that network and security teams are falling behind.

Visibility is the starting point, but getting to full visibility takes several steps. These include:

  • Device identification and discovery. It’s important to have a tool that automatically detects, profiles, and classifies what’s on the network and develops a complete inventory of devices. Once profiled, security professionals can answer key questions, such as, “What OS is on the device?” “How is it configured?” and “Is it trusted or rogue?” It’s important that the tool continuously monitors the network so a device can be discovered and profiled as soon as it is connected.
  • Predictive analysis. After discovery, the behavior of the devices should be learned and baselined so systems can react to an attack before it does any harm. Once the “norm” is established, the environment can be monitored for anomalies and then action taken. This is particularly useful for advanced persistent threats (APTs) that are “low and slow” where they remain dormant and quietly map out the environment. Any change in behavior, no matter how small, will trigger an alert.
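
The two steps above can be sketched together in a few lines: a toy inventory that profiles devices as they connect, plus a simple statistical baseline that flags behavior outside the learned "norm." The class and field names here are illustrative, not any vendor's API; real tools profile far more than one traffic metric.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class Device:
    mac: str
    os: str = "unknown"
    trusted: bool = False
    traffic_samples: list = field(default_factory=list)  # bytes/min observations

class Inventory:
    """Toy device inventory: profile on connect, baseline, flag anomalies."""
    def __init__(self):
        self.devices = {}

    def on_connect(self, mac, os="unknown", trusted=False):
        # Profile and classify the device as soon as it joins the network.
        self.devices[mac] = Device(mac, os, trusted)

    def observe(self, mac, bytes_per_min):
        self.devices[mac].traffic_samples.append(bytes_per_min)

    def is_anomalous(self, mac, bytes_per_min, sigma=3.0):
        # Compare a new observation against the learned norm (mean +/- 3 sigma).
        samples = self.devices[mac].traffic_samples
        if len(samples) < 2:
            return False  # not enough history to baseline yet
        mu, sd = mean(samples), stdev(samples)
        return abs(bytes_per_min - mu) > sigma * max(sd, 1e-9)

inv = Inventory()
inv.on_connect("aa:bb:cc:01", os="RTOS", trusted=True)
for v in [100, 105, 98, 102, 101]:
    inv.observe("aa:bb:cc:01", v)
print(inv.is_anomalous("aa:bb:cc:01", 103))   # -> False, within normal range
print(inv.is_anomalous("aa:bb:cc:01", 5000))  # -> True, sudden spike
```

An APT that stays "low and slow" is exactly why the baseline matters: any sustained drift from a device's profiled behavior, however small, is worth an alert.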

Segmentation increases security agility, stops threats from moving laterally

This is the biggest no-brainer in security today. Fortinet’s John Maddison recently talked with me about how segmentation adds flexibility and agility to the network and can protect against insider threats and spillover from malware that has infected other parts of the network. He was talking about it in the context of SD-WAN, but it’s the same problem, only magnified with IoT.

Segmentation works by assigning policies, separating assets, and managing risk. When a device is breached, segmentation stops the threat from moving laterally, as assets are classified and grouped together. For example, a policy can be established in a hospital to put all heart pumps in a secure segment. If one is breached, there is no access to medical records.
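The hospital example can be made concrete with a toy reachability check: devices are grouped into segments, and traffic crosses a segment boundary only when a rule explicitly permits it. Segment and device names are purely illustrative.

```python
# Toy segmentation model: traffic is allowed only inside a segment
# unless an explicit inter-segment rule permits it.
SEGMENTS = {
    "heart_pump_7":  "medical_devices",
    "mri_console_2": "medical_devices",
    "records_db":    "patient_records",
    "reception_pc":  "office_it",
}

ALLOWED_CROSS = {("office_it", "patient_records")}  # reception may query records

def can_reach(src, dst):
    s, d = SEGMENTS[src], SEGMENTS[dst]
    return s == d or (s, d) in ALLOWED_CROSS

# A breached heart pump cannot move laterally to patient records:
print(can_reach("heart_pump_7", "records_db"))    # -> False
print(can_reach("heart_pump_7", "mri_console_2")) # -> True (same segment)
```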

When putting together a segmentation plan, there are three key things to consider.

  • Risk identification. The first step is to classify devices by whatever criteria the company deems important. This can be users, data, devices, locations, or almost anything else. Risk should then be assigned to groups with similar risk profiles. For example, in a hospital, all MRI-related endpoints can be isolated into their own segment. If one is breached, there’s no access to medical records or other patient information.
  • Policy management. As the environment expands, new devices need to be discovered and have a policy applied to them. If a device moves, the policy needs to move with it. It’s important that policy management be fully automated because people can’t make changes fast enough to keep up with dynamic organizations. Policies are the mechanism to manage risk across the entire company.
  • Control. Once a threat actor gains access, they can roam the network for weeks before acting. Isolating IoT endpoints and the other devices, servers, and ports they communicate with lets the company separate resources on a risk basis. Treating the parts of the network that interact with IoT devices differently from a policy standpoint allows the organization to control risk.
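
The policy-management point above, that assignment must be automated and a policy must follow a device when it moves, can be sketched as a small state machine. The risk classes and policy fields are hypothetical stand-ins for whatever criteria the company deems important.

```python
# Toy policy manager: a newly discovered device gets a policy from its
# risk class, and the policy follows the device when it moves.
POLICY_BY_CLASS = {
    "iot_sensor":  {"allow_ports": [443], "segment": "iot"},
    "workstation": {"allow_ports": [80, 443, 22], "segment": "corp"},
}

class PolicyManager:
    def __init__(self):
        self.assignments = {}  # device_id -> (location, policy)

    def discover(self, device_id, risk_class, location):
        # New device appears: classify it and apply the class policy automatically.
        self.assignments[device_id] = (location, POLICY_BY_CLASS[risk_class])

    def move(self, device_id, new_location):
        # The device moves; its policy moves with it, no human in the loop.
        _, policy = self.assignments[device_id]
        self.assignments[device_id] = (new_location, policy)

pm = PolicyManager()
pm.discover("cam-42", "iot_sensor", "floor-1")
pm.move("cam-42", "floor-3")
loc, pol = pm.assignments["cam-42"]
print(loc, pol["segment"])  # -> floor-3 iot
```

The point of the automation is scale: people cannot re-apply policies by hand every time a device joins or moves in a dynamic organization.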

Device protection is the final step in IoT security

The priority for IoT security is to protect the device first and then the network. Once an IoT device is secured and joins the network, it must be secured in a coordinated manner with other network elements. Protecting IoT endpoints is a matter of enforcing policies correctly. This is done through the following mechanisms:

  • Policy flexibility and enforcement. The solution needs to be flexible, with the ability to define and enforce policies at the device and access level. To meet the demands of IoT, rules must be enforced that govern device behavior, traffic types, and where a device can reside on the network. IoT endpoints, consumer devices, and cloud apps are examples where different policies must be established and enforced.
  • Threat intelligence. Once controls are established, it’s important to consistently enforce policies and translate compliance requirements across the network. This creates an intelligent fabric of sorts that’s self-learning and can respond to threats immediately. When intelligence is distributed across the network, action can be taken at the point of attack instead of waiting for the threat to come to a central point. The threat intelligence should be a combination of local information and global information to identify threats before they happen.
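
Both mechanisms can be combined in a minimal sketch: per-device-class rules enforce policy locally, and a shared threat feed lets any enforcement point block a reported indicator at the point of attack rather than hauling traffic to a central chokepoint. The rule sets and indicator format here are assumptions for illustration.

```python
# Toy enforcement point: per-device-class rules plus a shared threat feed.
# When any node reports a bad indicator, every enforcement point can block
# it locally instead of forwarding traffic to a central inspection point.
THREAT_FEED = set()  # shared pool of local + global indicators

RULES = {
    "iot":  {"allowed_dst_ports": {443, 8883}},  # HTTPS + MQTT over TLS only
    "byod": {"allowed_dst_ports": {80, 443}},
}

def report_indicator(ip):
    THREAT_FEED.add(ip)  # distributed intelligence: visible to all nodes

def permit(device_class, dst_ip, dst_port):
    if dst_ip in THREAT_FEED:
        return False  # act at the point of attack
    return dst_port in RULES[device_class]["allowed_dst_ports"]

print(permit("iot", "10.0.0.5", 8883))  # -> True
report_indicator("10.0.0.5")
print(permit("iot", "10.0.0.5", 8883))  # -> False: intel blocks it everywhere
```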

Unfortunately for network and security professionals, there is no “easy button” when it comes to securing IoT devices. However, with the right planning and preparation even the largest IoT environments can be secured so businesses can move forward with their implementations without putting the company at risk.

Thanks to Zeus Kerravala (see source)

The big picture: Is IoT in the enterprise about making money or saving money?

Data Gravity and Cloud Security