Thursday, February 28, 2019

Frontier: Losing Customers While Raising Prices; Company Loses $643 Million in 2018

In the last three months of 2018, Frontier Communications said goodbye to 67,000 broadband customers and wrote down the value of its assets and business by $241 million; for the full year, the company posted a $643 million net loss, as it struggles with a deteriorating copper wire network in many of the states where it operates.

But Wall Street was pleased the company’s latest quarterly results were not worse, and helped lift Frontier’s stock from $2.42 to $2.96 this afternoon, still down considerably from the $125 a share price the company commanded just four years ago.

Frontier’s fourth quarter 2018 financial results arrived the same week Windstream, another independent telephone company, declared Chapter 11 bankruptcy reorganization. Life is rough for the nation’s legacy telephone companies, especially those that have continued to depend on copper wire infrastructure that, in some cases, was attached to poles during the Johnson or Nixon Administrations.

Frontier Communications CEO Dan McCarthy is the telephone company’s version of Sears’ former CEO Edward Lampert. Perpetually optimistic, McCarthy has embarked on a long-term ‘transformation’ strategy at Frontier, aiming to wring additional profit out of a business that provides service to customers in 29 states. Much of that effort has focused on cost-cutting measures, including layoffs of 1,560 workers last year, a sale of wireless towers, and various plans to make business operations more efficient, delivering mixed results.


Frontier’s efforts to improve customer service have been hampered by the quality and pricing of its services, which draw complaints from customers, many of whom eventually depart. The company’s overall health continues to decline, with revenue gains coming mostly from rate increases and new hidden fees and surcharges. In fact, much of Frontier’s latest revenue improvement comes from charging customers more for the same service.

McCarthy calls it ‘cost recovery’ and ‘steady-state pricing.’

“One of the things that we’ve been focused on really for the better part of two years is …. taking advantage of pricing opportunities [and] recovering content costs — really dealing with customers moving from promotional pricing to steady-state pricing, and then offering different opportunities for customers both from a speed and package perspective,” McCarthy said Tuesday. “The quarter really was about us targeting customers very selectively and really trying to improve customer lifetime value.”

By “selectively,” McCarthy means being willing to let promotion-seeking customers go and being less amenable to customers trying to negotiate a lower bill. The result, so far, is 103,000 service disconnects over the past three months and 379,000 fewer customers over the past year. A good number of those customers were subscribed to Frontier FiOS fiber-to-the-home service, but still left for a cable company or competing fiber provider, often because Frontier kept raising their bill.



Thanks to Phillip Dampier (see source)

Cisco warns a critical patch is needed for a remote access firewall, VPN and router

Cisco is warning organizations with remote users that have deployed a particular set of Cisco wireless firewall, VPN and router products to patch a critical vulnerability in each that could let attackers break into the network.

The vulnerability, which has an impact rating of 9.8 out of 10 on the Common Vulnerability Scoring System, lets a potential attacker send malicious HTTP requests to a targeted device. A successful exploit could let the attacker execute arbitrary code on the underlying operating system of the affected device as a high-privilege user, Cisco stated.

The vulnerability is in the web-based management interface of three products: Cisco’s RV110W Wireless-N VPN Firewall, RV130W Wireless-N Multifunction VPN Router and RV215W Wireless-N VPN Router. All three products are positioned as remote-access communications and security devices.

The web-based management interface of these devices is available through a local LAN connection or through the remote-management feature; by default, the remote-management feature is disabled on these devices, Cisco said in its Security Advisory.

It said administrators can determine whether the remote-management feature is enabled for a device by opening the web-based management interface and choosing “Basic Settings > Remote Management.” If the “Enable” box is checked, remote management is enabled for the device.

The vulnerability is due to improper validation of user-supplied data in the web-based management interface, Cisco said.

Cisco has released software updates that address this vulnerability, and customers should check their software license agreement for more details.

Cisco warned of other developing security problems this week.

Elasticsearch

Cisco’s Talos security researchers warned that users need to keep a close eye on unsecured Elasticsearch clusters. Elasticsearch is an open-source distributed search and analytics engine built on Apache Lucene. 

“We have recently observed a spike in attacks from multiple threat actors targeting these clusters,” Talos stated.  In a post, Talos wrote that attackers are targeting clusters using versions 1.4.2 and lower, and are leveraging old vulnerabilities to pass scripts to search queries and drop the attacker’s payloads. These scripts are being leveraged to drop both malware and cryptocurrency-miners on victim machines.

Talos also wrote that it has identified social-media accounts associated with one of these threat actors. “Because Elasticsearch is typically used to manage very large datasets, the repercussions of a successful attack on a cluster could be devastating due to the amount of data present. This post details the attack methods used by each threat actor, as well as the associated payloads,” Cisco wrote.
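
For administrators who want a quick look at where their clusters stand, a minimal sketch along these lines can flag nodes in the version range Talos describes. It assumes an unauthenticated cluster reachable on localhost:9200 and the Python requests library; adjust for your environment.

    # Flag Elasticsearch nodes old enough to fall in the attack window
    # described above (versions 1.4.2 and lower). Assumes the cluster API
    # is reachable on localhost:9200 without authentication.
    import requests

    def in_targeted_range(version: str, ceiling=(1, 4, 2)) -> bool:
        parts = tuple(int(p) for p in version.split(".")[:3])
        return parts <= ceiling

    info = requests.get("http://localhost:9200", timeout=5).json()
    version = info["version"]["number"]
    if in_targeted_range(version):
        print(f"{version}: within the 1.4.2-and-lower range targeted in these attacks")
    else:
        print(f"{version}: outside the version range described in the advisory")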

Docker and Kubernetes

Cisco continues to watch a run-time security issue with Docker and Kubernetes containers. “The vulnerability exists because the affected software improperly handles file descriptors related to /proc/self/exe. An attacker could exploit the vulnerability either by persuading a user to create a new container using an attacker-controlled image or by using the docker exec command to attach into an existing container that the attacker already has write access to,” Cisco wrote.

“A successful exploit could allow the attacker to overwrite the host's runc binary file with a malicious file, escape the container, and execute arbitrary commands with root privileges on the host system,” Cisco stated.  So far Cisco has identified only three of its products as susceptible to the vulnerability: Cisco Container Platform, Cloudlock and Defense Orchestrator.  It is evaluating other products, such as the widely used IOS XE Software package.
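
For teams triaging exposure, a small sketch like the following simply reports the container-runtime versions on a host so they can be compared against the fixed releases listed in the relevant advisories. It assumes the docker and runc binaries are installed and on the PATH.

    # Report Docker engine and runc versions for comparison against the
    # fixed releases listed in the vendor advisories for this runc issue.
    import subprocess

    def version_of(cmd):
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, check=True)
            return out.stdout.strip()
        except (OSError, subprocess.CalledProcessError) as exc:
            return f"unavailable ({exc})"

    print("Docker engine:", version_of(["docker", "version", "--format", "{{.Server.Version}}"]))
    print("runc:", version_of(["runc", "--version"]))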

Webex

Cisco issued a third patch-of-a-patch for its Webex system. Specifically Cisco said in an advisory that a vulnerability in the update service of Cisco Webex Meetings Desktop App and Cisco Webex Productivity Tools for Windows could allow an authenticated, local attacker to execute arbitrary commands as a privileged user. The company issued patches to address the problem in October and November, but the issue persisted.

“The vulnerability is due to insufficient validation of user-supplied parameters. An attacker could exploit this vulnerability by invoking the update service command with a crafted argument. An exploit could allow the attacker to run arbitrary commands with SYSTEM user privileges,” Cisco stated. 

The vulnerability affects all Cisco Webex Meetings Desktop App releases prior to 33.6.6, and Cisco Webex Productivity Tools Releases 32.6.0 and later prior to 33.0.7, when running on a Microsoft Windows end-user system.
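
The affected ranges above translate into a simple version check; here is a minimal sketch of that logic (release strings are assumed to be plain dotted integers, and the product labels are illustrative):

    # Affected-version logic as stated above: Meetings Desktop App releases
    # prior to 33.6.6, and Productivity Tools releases 32.6.0 and later but
    # prior to 33.0.7.
    def parse(release: str) -> tuple:
        return tuple(int(p) for p in release.split("."))

    def is_affected(product: str, release: str) -> bool:
        ver = parse(release)
        if product == "meetings_desktop_app":
            return ver < parse("33.6.6")
        if product == "productivity_tools":
            return parse("32.6.0") <= ver < parse("33.0.7")
        raise ValueError(f"unknown product: {product}")

    print(is_affected("meetings_desktop_app", "33.6.5"))  # True
    print(is_affected("productivity_tools", "33.0.7"))    # False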

Details on how to address this vulnerability are here.



Thanks to Michael Cooney (see source)

New chemistry-based data storage would blow Moore’s Law out of the water

Molecular electronics, where charges move through tiny, sole molecules, could be the future of computing and, in particular, storage, some scientists say.

Researchers at Arizona State University (ASU) point out that a molecule-level computing technique, if its development succeeds, would shatter Gordon Moore’s 1965 prophecy — Moore's Law — that the number of transistors on a chip would double every year, allowing electronics to get proportionally smaller. In this case, hardware, including transistors, could conceivably fit on individual molecules, reducing chip sizes much more significantly than Moore ever envisaged.

“The intersection of physical and chemical properties occurring at the molecular scale” is now being explored, and shows promise, an ASU article says. The researchers think Moore’s miniaturization projections will be blown out of the water.

Ultra-miniaturization, using chemistry and its molecules and atoms, has been on the scientific community’s radar for a while. However, it’s been rocky—temperature has been a problem, among other things.

One big issue, which may be about to be solved, is related to controlling flowing electrons. The flowing current, acting like a wave, gets interfered with—a bit like a water wave. The trouble is called quantum interference and is an area in which the researchers claim to be making progress.

Researchers want to get a handle on “not only measuring quantum phenomena in single molecules, but also controlling them,” says Nongjian "NJ" Tao, director of the ASU's Biodesign Center for Bioelectronics and Biosensors, in the article.

He says that by understanding the charge-transport properties better, they’ll be able to develop new, ultra-tiny electronic devices. If successful, data storage equipment and the general processing of information could end up operating through high-speed, high-power molecular switches. Transistors and rectifiers could also become molecular scale. Miniaturization-limiting silicon could be replaced.

“A single organic molecule suspended between a pair of electrodes as a current is passed through the tiny structure” is the foundation for the experiments, the school explains. A system called electrochemical gating, in which conductance is controlled, is then used. It manages the interference and is related to how “waves in water can combine to form a larger wave or cancel one another out, depending on their phase.” Through this technique, the researchers say they have been able, for the first time, to fine-tune conductance in a single molecule. That’s a big step. Capacitance is the storing of electrical charge.
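
For readers who want the wave analogy made concrete, the phase dependence can be written as the superposition of two equal-amplitude waves; this is a generic textbook identity, not a formula taken from the ASU work:

    $$A\cos(\omega t) + A\cos(\omega t + \phi) = 2A\cos\!\left(\tfrac{\phi}{2}\right)\cos\!\left(\omega t + \tfrac{\phi}{2}\right)$$

At a phase difference of zero the two waves reinforce each other (amplitude 2A); at a phase difference of π they cancel completely, which is, loosely speaking, the behavior the gating technique is tuning.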

Other chemistry-related data storage research

I’ve written before about chemistry “superseding traditional engineering” in shrinking data storage. Last year, in work unrelated to the quantum-interference project at ASU and its collaborators, Brown University said it was working on ways to store terabytes of data chemically in a flask of liquid.

“Synthetic molecule storage in liquids could one day replace hard drives,” I wrote. In a proof of concept, the Brown team loaded an 81-pixel image onto 25 separate molecules using a chemical reaction. It works similarly to how pharmaceuticals get components onto one molecule.

Researchers at the University of Basel in Switzerland are also attempting to shrink data storage through chemistry. That team explained in a media release late last year that it plans to use a technique similar to the one used to record CDs, where metal is melted within plastic and then allowed to re-form, thus encoding data. But they want to attempt it at an ultra-miniature atomic or molecular level. The team has just succeeded in controlling molecules in a self-organizing network.

All of the research is impressive, and like the ASU article says, “It’s unlikely Moore could have foreseen the extent of the electronics revolution currently underway."



Thanks to Patrick Nelson (see source)

BrandPost: Why Data Center Management Responsibilities Must Include Edge Data Centers

Now that edge computing has emerged as a major trend, the question for enterprises becomes how to migrate the data center management expertise acquired over many years to these new, remote environments.

Enterprise data centers have long provided a strong foundation for growth.  They enable businesses to respond more quickly to market demands. However, this agility is heavily dependent on the reliability and manageability of the data center.  As data center operational complexity increases, maintaining uptime while minimizing costs becomes a bigger challenge.

In order to maintain a high level of resiliency, existing data center best practices must now be exported to the emerging edge computing environments. In edge settings, reliability and manageability are by no means assured. The majority of workers located in edge environments (think retail store clerks, for example) lack data center or IT experience.  Yet edge IT environments have a direct impact on corporate profitability (think of a retail outlet whose cash registers and promo displays go down in the middle of the holiday shopping rush). A new way of thinking is necessary to ensure edge sites are properly managed and business agility is maintained.

The Administrative Challenge of Edge Computing

As compute power and storage are now found near a hospital bed, on an off-shore oil rig, or on a factory floor, real-time decisions need to be made within a secure environment where latency is not tolerated.

Within just one global enterprise, potentially thousands of edge sites will require solutions that can help maintain application uptime and data integrity. Unlike the more centralized data center business models, on-site administrators are often not available to support edge environments. The key to addressing this challenge is to deploy tools capable of performing remote management and predictive maintenance.

New Technology Innovations for Edge Data Center Management

Fortunately, new technology innovations are now making it possible to capture the expertise needed to support the edge environments. One example of this technology is the micro data center. These prepackaged blocks of processing, storage, power and cooling are often shipped to end users fully integrated, pre-configured, pre-assembled, and pre-tested. They can begin working as soon as they are delivered and plugged in. New tools are also now available to gain the necessary management control of these distributed data centers. For example, Schneider Electric’s cloud-based EcoStruxure IT infrastructure management software enables remote administrators to monitor critical micro data center performance details like temperature, humidity, and available backup battery runtime. These new hardware and software solutions are ensuring the availability of data center systems, regardless of remote location.

Such tools also enable predictive maintenance (knowing in advance that a particular component is likely to fail). This new level of system and component monitoring allows large retail outlets, for example, to avoid unplanned downtime. Parts can be replaced during off-hours before any failure or missed sales occur.

To learn more about how micro data centers and remote support software can help support new edge computing environments, download the Schneider Electric white paper “Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge” or view this short video on data center resiliency. 



Thanks to Brand Post (see source)


Why the industrial IoT is more important than consumer devices — and 7 more surprising IoT trends

Given the Internet of Things’ (IoT) perch atop the hype cycle, IoT trend-spotting has become a full-time business, not just an end-of-the-year pastime. It seems every major — and minor — IoT player is busy laying out its vision of where the technology is going. Most of them harp on the same themes, of course, from massive growth to security vulnerabilities to skills shortages.

Those are all real concerns, but Chris Nelson, vice president of engineering at operational intelligence (OT) vendor OSIsoft, shared some less conventional viewpoints via email. In addition to his contention that the IoT will blur the lines between IT, which runs the customers’ systems and email, and OT, which runs the technology behind the production systems, he talked about what will drive the IoT in the next year.

8 trends driving the IoT in 2019

Let’s take a closer look at the eight trends Nelson is thinking about:

1. Industrial and commercial applications will drive the industry, not consumers

According to Nelson, that’s because businesses can monetize IoT’s benefits better. He cites energy consumption as a key example, noting that “industry consumes 54 percent of delivered energy worldwide, according to the Energy Information Agency, or more than consumers or transportation combined.” Reducing energy use at an aluminum or paper plant by one or two percentage points, he says, can mean millions of dollars in savings. A consumer cutting power consumption by 1 percent would save only a few dollars a month.
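
A rough, purely illustrative calculation shows why the two cases feel so different; the spending figures below are assumptions for the sake of the comparison, not numbers from Nelson or the EIA.

    # Purely illustrative figures, assumed for the comparison:
    # a large plant spending $100M a year on energy versus a household
    # paying $100 a month.
    plant_annual_energy_spend = 100_000_000   # dollars per year (assumed)
    household_monthly_bill = 100              # dollars per month (assumed)

    print(plant_annual_energy_spend * 0.02)   # a 2% cut: ~$2,000,000 a year
    print(household_monthly_bill * 0.01)      # a 1% cut: ~$1 a month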

My take: True enough, but there are a lot more consumers than there are businesses, and overall consumer spending drives the U.S. economy. Still, Nelson has a point that relatively few big enterprise implementations could help jumpstart IoT usage, while energizing the mass consumer market can take years of expensive marketing to clarify sometimes complex and esoteric benefits. In addition, we can hope that industrial and enterprise IoT will be better equipped to deal with security concerns.

2. The edge will be far more important than people realize

“The edge is basically any place — a wind farm, a factory — where data is generated, analyzed, and largely stored locally,” Nelson said. “Wait? Isn’t that just a data center? Sort of. The difference is the Internet of Things.” His point is that most of the vast amount of machine-generated data doesn’t need to go very far. “The people who want it and use it are generally in the same building,” he noted, quoting Gartner’s prediction that more than 50 percent of data will be generated and processed outside traditional data centers — on the edge — although “snapshots and summaries might go to the cloud for deep analytics.”

But Nelson wasn’t sure about what kind of edge architectures would prevail. The edge might function like an interim way station for the cloud, he noted, or we could see the emergence of “Zone” networking — edges within edges — that can conduct their own analytics and perform other tasks on a smaller, more efficient scale.

My take: It’s hard to disagree with the synergy between the IoT and “the edge,” but as the cloud architectures continue to dominate computing architectures, it seems like the architectural distinctions among the edge, the data center, and the cloud may start to fade.

3. Synthetic data will become a more urgent concern

Nelson defines synthetic data as “misleading information that makes good people do bad things.” He means things like hackers sending “synthetic” notifications to a control room to get operators to open gates on a reservoir, flooding a neighborhood. He calls it “Stuxnet goes mainstream.” He noted work by Lawrence Berkeley Lab and Aperio, among others, to spot fake data.

My take: Given my work in the application monitoring space, "synthetic data" means something a bit different to me. But Nelson is right that hacked or spoofed IoT data is a currently under-appreciated risk.

4. Real-time data will grow in importance

Nelson cited IDC data that says real-time data will grow from 15 percent of digital data in 2017 to 30 percent in 2030, alongside a 7x to 10x jump in total data volume. He predicts more innovation and investment in this area, “particularly in software that will let people understand what machines are saying.”
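
Taken together, those figures imply an even steeper curve for real-time data itself; here is a quick back-of-the-envelope calculation based only on the numbers quoted above:

    # If total data volume grows 7x-10x while the real-time share doubles
    # from 15 percent to 30 percent, real-time data itself grows ~14x-20x.
    share_2017, share_2030 = 0.15, 0.30
    for total_growth in (7, 10):
        realtime_growth = total_growth * share_2030 / share_2017
        print(f"total volume x{total_growth}: real-time data grows ~x{realtime_growth:.0f}")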

My take: There’s no question that using real-time data to drive real-time decisions will become increasingly important. Given the huge amounts of data involved, though, I would look for artificial intelligence (AI) and machine learning solutions to take the forefront in turning this data into action.

5. Smart equipment will begin to get momentum

Nelson predicted that “manufacturers will increasingly integrate real-time monitoring and diagnostics into equipment,” pointing to Caterpillar’s CAT Connect engine-monitoring services and Flowserve’s work building intelligence into its industrial pumps.

“Over the past five years,” he said, “we’ve seen the technology stack come together and several end-users conduct trials. Over the next five, we will see commercial adoption.”

My take: Again, I have no argument with this point, but building smarts into expensive, long-lasting industrial equipment carries its own risks in the fast-evolving world of IoT. Because IoT changes much faster than the useful life of the equipment, there’s a real risk of obsolescence unless vendors can create modular, upgradable, solutions.

6. Rules and business practices for data sharing will start to gel

Nelson posed an interesting question: “Let’s say an equipment provider provides ongoing monitoring on devices it sold or leased to an end user. Who owns that data? Most would say the end users, but what if the equipment provider conducted analytics on the raw data, thereby creating a second set of information that’s more valuable than the first? Can data from one facility be anonymized and used to optimize benchmarks for another owned by a competitor? These are big questions, and no one has figured them out yet.”

My take: Yes, yes, yes. But data ownership is a huge and thorny issue, and I am less confident that we’ll make real progress on solving it in 2019 or any time soon. I look for this to be an ongoing area of concern for years.

7. Traditional businesses will develop new business models out of IoT

Nelson cited small, rural utilities that have begun to sell broadband services by leveraging their smart-meter investments in new ways, as well as large utilities and manufacturers studying plans to commercialize their in-house IoT applications for predictive maintenance.

My take: That’s only the tip of the iceberg for what many see as the holy grail of IoT. Sure, saving money is great, but the real opportunity is using IoT to create wholly new businesses. I think it’s still too early to know what those new ideas will be and which ones will take off.

8. IoT projects will have to hit their numbers

“Companies won’t fund open-ended projects,” Nelson said, and “they will want to see payoff in two years or less.”

My take: Not sure we’re there yet. Many enterprise and industrial IoT projects are still in the pilot stage, trying to figure out what their “numbers” should be. Until that gets settled, it seems a little premature to talk about “making” those numbers.



Thanks to Fredric Paul (see source)

Wednesday, February 27, 2019

Sample Solution: Automating L3VPN Deployments

A long while ago I published my solution for automated L3VPN provisioning… and I’m really glad I can point you to a much better one ;)

Håkon Rørvik Aune decided to tackle the same challenge as his hands-on assignment in the Building Network Automation Solutions course and created a nicely-structured and well-documented solution (after creating a playbook that creates network diagrams from OSPF neighbor information).

Want to be able to do something similar? You missed the Spring 2019 online course, but you can get the mentored self-paced version with Expert Subscription.



Thanks to Ivan Pepelnjak (see source)


Struggling Dish’s Sling TV Cuts Prices 40% for First 90 Days

Sling TV, one of the first online streaming alternatives to cable television, is slashing prices by 40% for the first three months to attract more subscribers.

Sling’s basic plans are now priced at $15 a month, with more deluxe tiers available for $25 a month for new customers.

As competitors pick up new customers, a significant number are coming from Sling TV, which is known for having one of the smallest channel lineups in the streaming industry, and DirecTV Now, which has been raising prices. To protect its flank, Sling TV is cutting prices to win back old customers and attract new ones.

Sling still has the biggest customer base among streamers with an estimated 2.42 million customers at the end of 2018. But other providers are catching up:

  • Sling TV: Has 2.42 million customers, but added less than 50,000 new customers in the last quarter of 2018.
  • YouTube TV: Estimated at 1 million subscribers, picking up 400,000 new customers in the fourth quarter of 2018.
  • Hulu TV: Now up to 1 million customers, Hulu added 500,000 new customers in the last three months of 2018.
  • DirecTV Now: Lost 267,000 subscribers in the fourth quarter, ending 2018 with 1.6 million subscribers, down from 1.86 million as of Sept. 30.



Thanks to Phillip Dampier (see source)

Protecting the IoT: 3 things you must include in an IoT security plan

With many IT projects, security is often an afterthought, but that approach puts the business at significant risk. The rise of IoT adds orders of magnitude more devices to a network, which creates many more entry points for threat actors to breach. A bigger problem is that many IoT devices are easier to hack than traditional IT devices, making them the endpoint of choice for the bad guys.

IoT is widely deployed in a few industries, but it is still in the early innings for most businesses. For those just starting out, IT and security leaders should be laying out security plans for their implementations now. However, the security landscape is wide and confusing, so how to secure an IoT deployment may not be obvious. Below are three things you must consider when creating an IoT security plan.

What to include in an IoT security plan

Visibility is the foundation of IoT security

I’ve said this before, but it’s worth repeating. You can’t secure what you can’t see, so the very first step in securing IoT is knowing what’s connected. The problem is that most companies have no clue. Earlier this year, I ran a survey and asked how confident respondents were that they knew what devices were connected to the network. A whopping 61 percent said low or no confidence. What’s worse is that this is up sharply from three years ago when the number was 51 percent, showing that network and security teams are falling behind.

Visibility is the starting point, but there are several steps in getting to full visibility. These include:

  • Device identification and discovery. It’s important to have a tool that automatically detects, profiles, and classifies what’s on the network and develops a complete inventory of devices. Once profiled, security professionals can answer key questions, such as, “What OS is on the device?” “How is it configured?” and “Is it trusted or rogue?” It’s important that the tool continuously monitors the network so a device can be discovered and profiled as soon as it is connected.
  • Predictive analysis. After discovery, the behavior of the devices should be learned and baselined so systems can react to an attack before it does any harm. Once the “norm” is established, the environment can be monitored for anomalies and then action taken. This is particularly useful for advanced persistent threats (APTs) that are “low and slow,” remaining dormant while they quietly map out the environment. Any change in behavior, no matter how small, will trigger an alert. A toy sketch of this baseline-and-alert approach appears just after this list.
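
As promised above, here is a toy sketch of the baseline-and-alert idea. The traffic samples and the three-sigma threshold are illustrative choices, not a reference to any particular product:

    # Toy baseline-and-alert sketch: learn a simple statistical baseline
    # for one device metric, then flag readings that deviate sharply.
    # Sample values (kB per minute) and the 3-sigma threshold are made up.
    from statistics import mean, stdev

    baseline = [120, 118, 125, 121, 119, 123, 122, 120]
    mu, sigma = mean(baseline), stdev(baseline)

    def is_anomalous(reading: float, k: float = 3.0) -> bool:
        return abs(reading - mu) > k * sigma

    for reading in (121, 310):
        print(reading, "anomalous" if is_anomalous(reading) else "normal")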

Segmentation increases security agility, stops threats from moving laterally

This is the biggest no brainer in security today. Fortinet’s John Maddison recently talked with me about how segmentation adds flexibility and agility to the network and can protect against insider threats and spillover from malware that has infected other parts of the network. He was talking about it in the context of SD-WAN, but it’s the same problem, only magnified with IoT.

Segmentation works by assigning policies, separating assets, and managing risk. When a device is breached, segmentation stops the threat from moving laterally, as assets are classified and grouped together. For example, a policy can be established in a hospital to put all heart pumps in a secure segment. If one is breached, there is no access to medical records.

When putting together a segmentation plan, there are three key things to consider (a brief sketch of the idea follows this list).

  • Risk identification. The first step is to classify devices by whatever criteria the company deems important. This can be users, data, devices, locations, or almost anything else. Risk should then be assigned to groups with similar risk profiles. For example, in a hospital, all MRI-related endpoints can be isolated into their own segment. If one is breached, there’s no access to medical records or other patient information.
  • Policy management. As the environment expands, new devices need to be discovered and have a policy applied to them. If a device moves, the policy needs to move with it. It’s important that policy management be fully automated because people can’t make changes fast enough to keep up with dynamic organizations. Policies are the mechanism to manage risk across the entire company.
  • Control. Once a threat actor gains access, an attacker can roam the network for weeks before acting. Isolating IoT endpoints and the other devices, servers, and ports they communicate with allows the company to separate resources on a risk basis. Choosing to treat parts of the network that interact with IoT devices differently from a policy standpoint allows the organization to control risk.
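
Here is the minimal sketch mentioned above: devices are grouped into segments by risk profile, and traffic between segments is denied unless a policy explicitly allows it. The device names, segments, and allowed flows are hypothetical examples, not a real hospital policy.

    # Minimal segmentation sketch: group devices into segments by risk
    # profile and deny inter-segment traffic unless explicitly allowed.
    # All names and policies here are hypothetical examples.
    SEGMENT_OF = {
        "heart_pump_12":      "clinical_devices",
        "mri_console_3":      "imaging",
        "records_db_1":       "medical_records",
        "telemetry_server_1": "monitoring",
    }

    # Only these (source segment, destination segment) pairs may communicate.
    ALLOWED_FLOWS = {
        ("clinical_devices", "monitoring"),   # e.g. pumps may report telemetry
    }

    def flow_permitted(src: str, dst: str) -> bool:
        s, d = SEGMENT_OF[src], SEGMENT_OF[dst]
        return s == d or (s, d) in ALLOWED_FLOWS

    print(flow_permitted("heart_pump_12", "records_db_1"))       # False: lateral move blocked
    print(flow_permitted("heart_pump_12", "telemetry_server_1")) # True: explicitly allowed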

Device protection is the final step in IoT security

The priority for IoT security is to protect the device first and then the network. Once an IoT device is secured and joins the network, it must be secured in a coordinated manner with other network elements. Protecting IoT endpoints is a matter of enforcing policies correctly. This is done through the following mechanisms:

  • Policy flexibility and enforcement. The solution needs to be flexible and have the ability to define and enforce policies at the device and access level. To meet the demands of IoT, rules need to be enforced that govern device behavior, traffic types and where it can reside on the network. IoT endpoints, consumer devices, and cloud apps are examples where different policies must be established and enforced.
  • Threat intelligence. Once controls are established, it’s important to consistently enforce policies and translate compliance requirements across the network. This creates an intelligent fabric of sorts that’s self-learning and can respond to threats immediately. When intelligence is distributed across the network, action can be taken at the point of attack instead of waiting for the threat to come to a central point. The threat intelligence should be a combination of local information and global information to identify threats before they happen.

Unfortunately for network and security professionals, there is no “easy button” when it comes to securing IoT devices. However, with the right planning and preparation even the largest IoT environments can be secured so businesses can move forward with their implementations without putting the company at risk.



Thanks to Zeus Kerravala (see source)



How to move to a disruptive network technology with minimal disruption

Disruptive network technologies are great—at least until they threaten to disrupt essential everyday network services and activities. That's when it's time to consider how innovations such as SDN, SD-WAN, intent-based networking (IBN) and network functions virtualization (NFV) can be transitioned into place without losing a beat.

"To be disruptive, some disruption is often involved," says John Smith, CTO and co- founder of LiveAction, a network performance software provider. "The best way to limit this is to use proven technology versus something brand new—you never want to be the test case."

Smith suggests limiting risk by following a crawl, walk and run approach. "Define the use case and solve it while initially limiting the risk exposure to a discrete set of end users for proof of concept testing," he says. "It’s always good to ensure that the business case will drive the need for the disruptive networking technology—it helps justify the action.”

"Starting with a smaller proof of concept in a non-production environment is great way to get comfortable with the tech and gain some early operational experience," advises Shannon Weyrick, vice president of architecture at DNS and traffic management technologies provider NS1. Before launching any disruptive technology, make sure everyone involved recognizes the value, understands the technology and rollout process and agrees on the goals and metrics, he adds.

Switching safely to SDNs

Software defined networking (SDN) is designed to make networks both manageable and agile. Utilizing a proven technology that’s been in the field successfully is vital to ensuring minimal disruption around SDN deployments, Smith says. "On the data center side, Cisco ACI and VMWare NSX are reliable infrastructure technologies, but it really depends what fits best with the business," he observes.

Full network visibility is essential to minimizing disruption, as an SDN installation works out its inevitable start-up kinks. "Having visibility solutions in place, such as network performance monitoring and diagnostic (NPMD) tools, can eliminate deployment errors and quickly isolate issues," Smith explains.

Kiran Chitturi, CTO architect at Sungard Availability Services, an IT protection and recovery services provider, recommends choosing an approach that embraces open standards and encourages an open ecosystem between customers, developers and partners. "Before adopting at scale, be patient in selecting specific use-cases like optimizing networks for specific workloads, accessing control limits and so on," he says.

Start with the open source and open specification projects, suggests Amy Wheelus, network cloud vice president at AT&T. For the cloud infrastructure, the go-to open source project is OpenStack, with many operators and different use cases, including at the edge. For the service orchestration layer, ONAP is the largest project in Open Source, she notes. "At AT&T we have launched our mobile 5G network using several open source software components, OpenStack, Airship and ONAP."

Weyrick recommends "canarying" traffic before relying on it in production. "Bringing up a new, unused private subnet on existing production servers alongside existing interfaces and transitioning less-critical traffic, such as operational metrics, is one method," he says. "This allows you to get experience deploying and operating the various components of the SDN, prove operational reliability and gain confidence as you increase the percentage of traffic being transited by the new stack."
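
A minimal sketch of that staged cutover might look like the following: route a configurable share of less-critical flows over the new SDN path and raise the percentage as confidence grows. The hash-based split and the path names are illustrative choices, not any vendor's mechanism.

    # Staged "canary" cutover sketch: send a configurable share of
    # less-critical flows over the new SDN path, raising it over time.
    import hashlib

    def choose_path(flow_id: str, canary_percent: int) -> str:
        bucket = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16) % 100
        return "new-sdn-fabric" if bucket < canary_percent else "legacy-network"

    for pct in (5, 25, 50):
        on_sdn = sum(choose_path(f"metrics-flow-{i}", pct) == "new-sdn-fabric"
                     for i in range(1000))
        print(f"target {pct}%: {on_sdn / 10:.1f}% of test flows on the new path")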

It's also important to have a backup plan on hand. "Even the best-laid plans need a fallback," Weyrick says. "Make sure you have a plan for alternate transit for critical subsystems should the SDN fail." Ideally, such a strategy would include automated failover. "But even a manual plan, thought out ahead of time in worst-case scenarios, may prove helpful and increase your confidence during and after transition," he adds.

The many paths to SD-WAN adoption

There are many ways to take advantage of SD-WAN, which applies SDN's benefits to wide area networks. "Customers can leverage existing infrastructure from vendors like Riverbed or Cisco," Smith says. Organizations can also opt for new features added to security appliances, like Fortinet and WatchGuard, or they can leverage virtual forms from SaaS service providers.

Regardless of the tools selected, Smith recommends piloting the technology at a handful of selected sites. "Document the lessons you learn and use that [information] to write the MOPs (methods of procedures) needed for site cutovers once full deployment begins." He notes that potential adopters also need to understand how the technology scales. "The pilot may run great, but when you scale up to the number of planned sites you may hit unforeseen issues if you haven’t planned for them," Smith says.

SD-WAN implies the use of a controller to manage connections between a company’s branch offices, states Andrei Lipnitski, an information communication and technology department manager at software development company ScienceSoft. "To move to SD-WAN, the company needs one controller installed in the head office, with a system administrator to manage it, and to configure SD-WAN routers, which replace outdated hardware used in branch offices," he says.

SD-WAN architectures that support passthrough network setups allow the existing network setup to be left unchanged. "Once that install is complete, the network should function identical as before," notes Jay Hakin, CEO of Mushroom Networks, an SD-WAN vendor. "A good practice is to have a period of time where this setup is left running to make sure all applications and cloud services are running uninterrupted," he adds. Once reliable operation has been confirmed, adding additional WAN resources to the SD-WAN appliance, as well as adding configurations for any additional advanced features, becomes a staged and scheduled network modification and therefore does not inherit any downtime risk, Hakin notes.

IBN's unintended consequences

Intent-based networking (IBN) technology advances SDN an additional step. While SDNs have largely automated most network management processes, a growing number of organizations now require even greater capabilities from their networks in order to manage their digital transformation and ultimately assure that their network is operating as planned.

IBN allows administrators to set specific network policies and then rely on automation to ensure that those policies are implemented. "There's a lot of hype and misinformation around IBN," Smith says. "There's still some debate about what it actually can and can’t do ... so customers need to really investigate and spend time understanding what’s real and what’s theoretical," he cautions.

Adopting IBN without risking disruption requires a great deal of patience and practice, observes Tim Parker, vice president of network strategy at data center and colocation provider Flexential. "The more difficult part that almost outweighs the benefits is moving to ACI (application centric infrastructure) or new operating systems that support [IBN]," he explains. "For example, we automated our DDoS scrubbing based on NetFlow data from Kentik ... and Python scripts that react when a trigger or threshold is reached," he notes. "But it's far from [reaching] the true AI of making smart decisions based on learning the impacts of the last decision."
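
A stripped-down sketch of that threshold-trigger pattern is shown below. The rate lookup and the scrubbing call are stand-ins for whatever flow-analytics and mitigation tooling an operator actually uses; none of this is Kentik's or Flexential's code, and the threshold and prefix are example values.

    # Threshold-trigger pattern: watch a per-prefix traffic rate derived
    # from flow data and kick off scrubbing when it crosses a threshold.
    # get_prefix_rate_bps() and start_scrubbing() are stand-ins, not a
    # real vendor API.
    THRESHOLD_BPS = 2_000_000_000   # 2 Gbps, arbitrary example threshold

    def get_prefix_rate_bps(prefix: str) -> int:
        return 3_500_000_000        # stubbed sample reading for the demo

    def start_scrubbing(prefix: str) -> None:
        print(f"(stub) redirecting {prefix} through the scrubbing center")

    def check_and_react(prefix: str) -> None:
        rate = get_prefix_rate_bps(prefix)
        if rate > THRESHOLD_BPS:
            print(f"{prefix}: {rate} bps exceeds threshold, triggering mitigation")
            start_scrubbing(prefix)

    check_and_react("203.0.113.0/24")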

Andrew Wertkin, CTO of BlueCat, a network technology firm, believes that IBN is far more than a technology transformation. "It also affects organization skill-sets, operations, compliance/governance and existing service level agreements." He recommends that organizations assess their readiness in all of these areas. "Don’t get over your skis," Wertkin advises. "Start small and focused."

Look before leaping to NFV

Network functions virtualization (NFV) abstracts network functions, allowing them to be installed, controlled and manipulated by software running on standardized compute nodes. "NFV and SDN free networks from their dependence on underlying physical hardware," says Bill Long, vice president of interconnection services at data center and colocation provider Equinix. Instead, network orchestration and control are managed through software, without specialized equipment confined to a specific location. "Companies can safely connect their networks to applications and cloud services wherever they are and compute resources can be turned up or down as needed," Long explains. "This greatly increases scalability and simplicity."

Troubles can arise when enterprises fail to fully think through their need for NFV. "A lot of companies are jumping into NFV without first justifying a business case," Smith says. "For example, if the business needs more flexibility at its branch locations, then using NFV to spin up new network services is a great idea."

Adoption disruptions can be minimized by ensuring clear communication between IT teams. "NFV decouples network functions from the underlying server hardware, which enables greater flexibility and elasticity," says Ashish Shah, a vice president at Avi Networks, a data center and cloud application platform provider. "However, since the server team is responsible for maintaining and patching x86 servers, clear understanding and communications between the server and networking teams will be important."

Centralized policy and lifecycle management of virtual network functions is another important consideration for long-term success. Managing each virtual network function instance individually will be cumbersome and can negate the benefits of the transition to NFV, Shah warns.

Migrating existing applications will require additional care to ensure that all application dependencies, scripts and policies are accounted for. "Taking steps to document these dependencies can reduce disruptions," Shah says.

In the bigger picture, "whether focusing on SDN, SD-WAN, IBN or NFV, it’s important to remember that with each new technology comes new tools, training, support and more," Smith says. "Ensuring your teams have the proper tools and training is critical to successful deployments and minimal disruption."



Thanks to John Edwards (see source)

Tuesday, February 26, 2019

More Thoughts on Vendor Lock-In and Subscriptions

Albert Siersema sent me his thoughts on lock-in and the recent tendency to sell network device (or software) subscriptions instead of boxes. A few of my comments are inline.

Another trend in the industry is to convert support contracts into subscriptions. That is, the entrenched players seem to be focusing more on that business model (too). In the end, I feel the customer won't reap that many benefits, and you probably will end up paying more. But that's my old grumpy cynicism talking :)

While I agree with that, buying a subscription instead of owning a box (and deprecating it) also makes it easier to persuade the bean counters to switch the gear because there’s little residual value in existing boxes (and it’s easy to demonstrate total-cost-of-ownership). Like every decent sword this one has two blades ;)

At every customer I've always stressed to include terms like vendor agnostic, open standards, minimal nerd knobs and exit strategies in architecture and design principles. Mind you: vendor agnostic, not the stronger vendor neutral. Agnostic in my view means that one should strive for a design where equipment can be swapped out, but if you happen to choose a vendor specific technology, be sure to have an exit strategy. 

I like the idea, but unfortunately the least-common-denominator excludes “cool” features the vendors are promoting at conferences like Cisco Live or VMworld, and once the management gets hooked on the idea of “this magic technology can save the world” instead of “it’s Santa bringing me gifts every Christmas” you’re facing an uphill battle. There’s a reason there’s management/CIO track at every major $vendor conference.

And this is where the current trend worries me. Take for instance SD-Access. Although I'm sure some genuine thought has gone into the development of the technology, what I see is a complicated stack of technologies and interwoven components, ever more exposed as a magic black box. And in the process, the customer is converted from one business model to the other (subscriptions). Cisco is playing strong in this field, but they're not the only vendor to do so.

There's no real interoperability, and I'm wondering (I should say doubting) whether the complexity is really reduced. And the dependency on a given vendor will undoubtedly result in headaches and probably even downtime.

Formulating an exit strategy becomes ever more daunting because even with proper automation it will probably mean a rip-and-replace.

It's worse than that – every solution has its own API (every vendor will call it open, but that just means “documented”), and switching vendors often means ripping out the existing toolchain and developing (or installing) a new one.

Obviously there are intent-based vendors claiming how they can solve the problem by adding another layer of abstraction. Please read RFC 1925 and The ABC of Lock-In before listening to their presentations.

In the software development world I see an ever expanding field of options and rapid innovation, lots of them based on open source. Whereas infrastructure seems to be collapsing into fewer options. 

A lot of that is “the grass is greener on the other side of the fence.” Operating system space is mostly a Linux monoculture, with Windows fading and OSX/IOS having a small market share. Most everyone is using a MySQL clone as their relational database (kudos to the few Postgres users). If you want to run a web server, you can choose between Apache or Nginx. There are a gazillion programming languages, but the top five haven’t really changed in the last 10 years.

The ever-expanding field of options might also be a mirage. As anyone evaluating open-source automation tools and libraries quickly realizes, there’s a ton of them, but most of them are either semi-abandoned, unsupported, developed for a specific use case, not fit for use, or working on a platform you’re not comfortable with.



Thanks to Ivan Pepelnjak (see source)

Western Digital launches SSDs for different enterprise use cases

Last week I highlighted a pair of ARM processors with very different use cases, and now the pattern repeats as Western Digital, a company synonymous with hard-disk technology, introduces a pair of SSDs for markedly different use.

The Western Digital Ultrastar DC SN630 NVMe SSD and the Western Digital CL SN720 NVMe SSD both sport internally developed controller and firmware architectures, 64-layer 3D NAND technology and an NVMe interface, but that’s about where the similarities end.

The SN630 is a read-intensive drive capable of two disk writes per day, which means it has the performance to write the full capacity of the drive two times per day. So a 1TB version can write up to 2TB per day. But these drives are smaller in capacity, as WD traded capacity for endurance.
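
For context, a drive-writes-per-day rating translates directly into daily and lifetime write volume; in the quick calculation below, the five-year warranty period is an assumption for illustration, not a figure from Western Digital.

    # Translate the 2 DWPD rating above into write volume. The five-year
    # warranty period is assumed for illustration, not from the article.
    capacity_tb = 1.0
    dwpd = 2
    warranty_years = 5   # assumed

    daily_writes_tb = capacity_tb * dwpd
    lifetime_writes_pb = daily_writes_tb * 365 * warranty_years / 1000
    print(f"{daily_writes_tb:.0f} TB/day, ~{lifetime_writes_pb:.2f} PB over {warranty_years} years")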

The SN720 is a boot device optimized for edge servers and hyperscale cloud with a lot more write performance. Random write is 10x the speed of the SN630 and is optimized for fast sequential writes.

Both use NVMe, which is predicted to replace the ageing SATA interface. SATA was first developed around the turn of the century as a replacement for the IDE interface and has its legacy in hard disk technology. It is the single biggest bottleneck in SSD performance.

NVMe uses the much faster PCI Express protocol, which is much more parallel and has better error recovery. Rather than squeeze any more life out of SATA, the industry is moving to NVMe in a big way at the expense of SATA. IDC predicts SATA product sales peaked in 2017 at around $5 billion and are headed to $1 billion by 2022. PCIe-based drives, on the other hand, will skyrocket from around $3.5 billion in 2017 to almost $20 billion by 2022.

So the new SSDs are replacements not only for older 15K RPM hard disks but also for SATA drives, each aimed at specific use cases.

“The data landscape is rapidly changing. More devices are coming online and IoT is trying to connect everything together. The world is moving from on-prem to the cloud but also from the core cloud to the edge,” said Clint Ludeman, senior manager for product marketing for data center devices at Western Digital.

Reducing latency is becoming an issue, whether it’s a roundtrip from device to data center and back again or reducing latency at the core of the data center. “Big Data opens up opportunities, but then you gotta do something with data. Fast SSDs allow you to do something with the data,” he said.

That’s where the targeted products come into play. “We do see a shift from traditional server architecture, where an OEM would throw a server out there and not know what was running on it. Now we’re seeing a case where customers know their workloads and know their bottlenecks. That’s how we’re designing purpose-built products. As software stacks mature, you see different bottlenecks. It’s a continual thing we’re chasing,” he said.

And by going to NVMe they are able to reduce latency in the software stack to microseconds rather than the milliseconds that a hard disk works on. “We would have performance bottlenecks you couldn’t unlock with SATA or SAS interfaces. Now we can do real-time computing,” said Ludeman.

The CL SN720 is shipping now. The Ultrastar DC SN630 SSD is currently sampling with select customers with broad availability expected in April.



Thanks to Andy Patrizio (see source)



ICANN urges adopting DNSSEC now

Powerful malicious actors continue to pose a substantial risk to key parts of the internet and its Domain Name System infrastructure, so much so that the Internet Corporation for Assigned Names and Numbers (ICANN) is calling for an intensified community effort to deploy stronger DNS security technology.

Specifically, ICANN is calling for full deployment of the Domain Name System Security Extensions (DNSSEC) across all unsecured domain names. DNS, often called the internet’s phonebook, is the part of the global internet infrastructure that translates between human-readable domain names and the IP addresses computers need to access websites or send email. DNSSEC adds a layer of security on top of DNS.
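
For readers who want to see the lookup in question, here is a minimal sketch using the third-party dnspython package (pip install dnspython); the domain is just a placeholder.

    # Minimal sketch of the directory lookup DNSSEC is meant to protect:
    # asking the DNS for the address behind a name. Requires the third-party
    # dnspython package; example.com is a placeholder domain.
    import dns.resolver

    answer = dns.resolver.resolve("example.com", "A")
    for record in answer:
        print("example.com resolves to", record.address)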

DNSSEC technologies have been around since about 2010 but are not widely deployed: fewer than 20 percent of the world’s DNS registrars have deployed it, according to APNIC, the regional internet address registry for the Asia-Pacific region.

DNSSEC adoption has lagged because it was viewed as optional and can require a tradeoff between security and functionality, said Kris Beevers, co-founder and CEO of DNS vendor NS1.

DNSSEC prevents attacks that can compromise the integrity of answers to DNS queries by cryptographically signing DNS records to verify their authenticity, Beevers said.

“However, most implementations are incompatible with modern DNS requirements, including redundant DNS setups or dynamic responses from DNS-based traffic-management features,” Beevers said. “Legacy DNSSEC implementations break even basic functions, such as geo-routing, and is hard to implement across multiple vendors, which means poor performance and reduced availability for end users.”

Full deployment of DNSSEC ensures end users are connecting to the actual website or other service corresponding to a particular domain name, ICANN says. “Although this will not solve all the security problems of the Internet, it does protect a critical piece of it – the directory lookup – complementing other technologies such as SSL (https:) that protect the ‘conversation’, and provide a platform for yet-to-be-developed security improvements,” the organization added.
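
One quick, hedged way to see whether a given name benefits from this protection is to ask a validating resolver and check the Authenticated Data (AD) flag in its response. The sketch below again uses the third-party dnspython package; the resolver address and domain are assumptions, and any DNSSEC-validating resolver would do.

    # Sketch: check whether a validating resolver vouches for a name's DNSSEC
    # status by looking at the AD (Authenticated Data) flag in its response.
    # The resolver IP and domain are assumptions chosen for illustration.
    import dns.flags
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["1.1.1.1"]            # assumed validating resolver
    resolver.flags = dns.flags.RD | dns.flags.AD  # ask for validated answers
    resolver.use_edns(0, 0, 1232)                 # EDNS, since DNSSEC responses can be large

    answer = resolver.resolve("example.com", "A")
    validated = bool(answer.response.flags & dns.flags.AD)
    print("DNSSEC-validated answer" if validated else "answer was not validated")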

In a release calling for the increased use of DNSSEC technologies, ICANN noted that recent public reports show a pattern of multifaceted attacks utilizing different methodologies.

“Some of the attacks target the DNS, in which unauthorized changes to the delegation structure of domain names are made, replacing the addresses of intended servers with addresses of machines controlled by the attackers. This particular type of attack, which targets the DNS, only works when DNSSEC is not in use,” ICANN stated.

“Enterprises that are potential targets – in particular those that capture or expose user and enterprise data through their applications – should heed this warning by ICANN and should pressure their DNS and registrar vendors to make DNSSEC and other domain-security best practices easy to implement and standardized. They can easily implement DNSSEC signing and other domain security best practices with technologies in the market today,” Beevers said.  At the very least, they should work with their vendors and security teams to audit their implementations with respect to ICANN's checklist and other best practices, such as DNS delivery network redundancy to protect against DDoS attacks targeting DNS infrastructure, Beevers stated.

ICANN is an organization that typically thinks in decades, so the immediacy of the language – "alert", "ongoing and significant risk" – is telling. They believe it is critical for the ecosystem, industry and consumers of domain infrastructure to take urgent action to ensure DNSSEC signing of all unsigned domains, Beevers said.

“ICANN's direction drives broader policy decisions and actions for other regulatory bodies, and just as importantly, for major technology players in the ecosystem,” Beevers said.  “We are likely to see pressure from major technology players like browser vendors, ISPs and others to drive behavioral change in the application-delivery ecosystem to incentivize these changes. “

ICANN’s warning comes on the heels of the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) warning in January that all federal agencies should bolt down their Domain Name System in the face of a series of global hacking campaigns.

CISA said in its emergency directive that it is tracking a series of incidents targeting DNS infrastructure. CISA wrote that it “is aware of multiple executive-branch agency domains that were impacted by the tampering campaign and has notified the agencies that maintain them.”

CISA says that attackers have managed to intercept and redirect web and mail traffic and could target other networked services. The agency said the attacks start by compromising the user credentials of an account that can make changes to DNS records. The attacker then alters DNS records, such as address (A), mail exchange (MX) or name-server (NS) records, replacing the legitimate address of the service with an address the attacker controls.

These actions let the attacker direct user traffic to their own infrastructure for manipulation or inspection before passing it on to the legitimate service, should they choose. This creates a risk that persists beyond the period of traffic redirection, CISA stated. 

CISA noted that FireEye and Cisco Talos researchers had reported that malicious actors obtained access to accounts that controlled DNS records and made them resolve to their own infrastructure before relaying it to the real address. Because they could control an organization’s DNS, they could obtain legitimate digital certificates and decrypt the data they intercepted – all while everything looked normal to users.
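
A simple defensive habit in this spirit (not something CISA’s directive prescribes, just an illustrative sketch) is to periodically compare a zone’s delegation against a known-good baseline and alert on any drift. The domain and baseline names below are placeholders, and the code uses the third-party dnspython package.

    # Illustrative sketch: compare a zone's current NS records against a
    # known-good baseline and flag any drift, the kind of change described
    # in the tampering campaign above. Requires the third-party dnspython
    # package; the domain and baseline names are placeholders.
    import dns.resolver

    DOMAIN = "example.com"
    EXPECTED_NS = {"a.iana-servers.net.", "b.iana-servers.net."}  # assumed baseline

    observed = {rr.target.to_text() for rr in dns.resolver.resolve(DOMAIN, "NS")}

    if observed != EXPECTED_NS:
        print("ALERT: NS records changed")
        print("  unexpected:", sorted(observed - EXPECTED_NS))
        print("  missing:   ", sorted(EXPECTED_NS - observed))
    else:
        print("NS records match the baseline")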

ICANN offered a checklist of recommended security precautions that members of the domain-name industry, including registries, registrars, resellers and related parties, should take to protect their systems, their customers’ systems and information reachable via the DNS.


Thanks to Michael Cooney (see source)


BrandPost: How edge computing will bring business to the next level

What do embedded sensors, ecommerce sites, social media platforms, and streaming services have in common? They all produce vast volumes of data, much of which travels across the internet. In fact, Cisco estimates global IP traffic will grow to 3.3 zettabytes annually by 2021 – triple the level of internet traffic in 2017.

For many businesses, these data packets represent treasure troves of actionable information, from customers’ buying preferences to new market trends. But as the volume and velocity of data increases, so too does the inefficiency of transmitting all this information to a cloud or data center for processing.

Simply put, the internet isn’t designed to account for how long any given packet will take to reach its destination. To complicate matters, that video of your employee’s new nephew is traveling over the exact same network as business-critical data. The result: network latency, costly network bandwidth, and data storage, security and compliance challenges. That’s especially true for delay-sensitive traffic such as voice and video.

Getting to the source

No wonder businesses are increasingly turning to the edge to solve challenges in cloud infrastructure. Edge data centers work by bringing bandwidth-intensive content closer to the end user, and latency-sensitive applications closer to the data. Types of edge computing vary, including local devices, localized data centers, and regional data centers. But the objective is the same: to place computing power and storage capabilities directly on the edge of the network.
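
To put a rough number on the backhaul savings, here is a back-of-the-envelope Python sketch comparing the raw data a fleet of sensors generates with what actually has to cross the WAN if an edge node aggregates and filters it first. Every figure in it (sensor count, sample size, reduction ratio) is an illustrative assumption.

    # Back-of-the-envelope sketch: how much backhaul traffic edge processing
    # can avoid. All inputs are illustrative assumptions, not IDC or vendor data.

    sensors = 10_000
    samples_per_second = 10
    bytes_per_sample = 200
    edge_reduction = 0.98   # assume edge nodes filter/aggregate 98% of raw readings

    raw_bytes_per_day = sensors * samples_per_second * bytes_per_sample * 86_400
    backhaul_bytes_per_day = raw_bytes_per_day * (1 - edge_reduction)

    print(f"Raw data generated per day:      {raw_bytes_per_day / 1e12:.2f} TB")
    print(f"Sent to the core after the edge: {backhaul_bytes_per_day / 1e9:.1f} GB")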

In fact, IDC research predicts that within three years, 45% of IoT-created data will be stored, processed, analyzed, and acted upon close to, or at, the edge of the network, and that more than 6 billion devices will be connected to edge computing solutions.

From a technology perspective, edge computing delivers a number of key advantages. Because it turns cloud computing into a more distributed architecture, disruptions are limited to a single point in the network.

For instance, if a wind storm or cyberattack causes a power outage, its impact would be limited to the edge computing device and the local applications on that device rather than the entire network. And because edge data centers are nearby, companies can achieve higher bandwidth, lower latency, and regulatory compliance around location and data privacy. The result is the same degree of reliability and performance as large data centers.

Edge computing also delivers business benefits to a wide variety of industries. Today’s retailers rely on a 24/7 online presence to deliver superior customer experiences. Edge computing can prevent site outages and increase availability for optimal uptime. By ensuring factory floor operators stay connected to plant systems, edge computing can improve a manufacturer’s operational efficiencies. And for healthcare practitioners, edge computing can place computing close to a device, such as a heart-rate monitor, ensuring reliable access to possibly life-saving, health-related data.

The speed at which the world is generating data shows no signs of slowing down. As volumes mount, edge computing is becoming more than an IT necessity; it’s a critical competitive advantage in the future.

To discover how edge computing can deliver technology and business benefits to your organization, visit APC.com


Thanks to Brand Post (see source)

Merger Complete: Appeals Court Rejects Bid to Throw Out AT&T-Time Warner Merger

The merger of AT&T and Time Warner (Entertainment) is safe.

A federal appeals court in Washington handed the U.S. Department of Justice its worst defeat in 40 years, as federal regulators fought unsuccessfully to block a huge “vertical” merger between two companies that do not compete directly with each other.

In a one-page, two-sentence ruling, a three-judge panel affirmed the lower court’s decision approving the $80 billion merger without conditions. In the U.S. District Court for the District of Columbia, Judge Richard Leon had ruled there was no evidence AT&T would use the merger to unfairly restrict competition, a decision scorned by Justice Department lawyers and consumer groups, both of which claimed the merger would allow AT&T to raise prices and restrict or impede competitors’ access to AT&T-owned networks.

In this short one-page ruling, a three-judge panel of the U.S. Court of Appeals for the D.C. Circuit sustained the lower court’s decision to allow the merger of AT&T and Time Warner, Inc. without any deal conditions.

The Justice Department’s legal team seemed to repeatedly irritate Judge Leon during the 2018 trial in the lower court, making it an increasingly uphill battle for the anti-merger side to win.

Judge Leon

Unsealed transcripts of confidential bench conferences with the attorneys arguing the case, made public in August 2018, showed Department of Justice lawyers repeatedly losing rulings:

  • Judge Leon complained that the Justice Department used younger lawyers to question top company executives, leading to this remarkable concession by DoJ Attorney Craig Conrath: “I want to tell you that we’ve listened very carefully and appreciate your comments, and over the course of this week and really the rest of the trial, you’ll be seeing more very gray hair, your honor.”
  • Leon grew bored with testimony from Cox Communications that suggested Cox would be forced to pay more for access to Turner Networks. Leon told the Cox executive to leave the stand and demanded to know “why is this witness here?”
  • Leon limited what Justice lawyers could question AT&T CEO Randall Stephenson about regarding AT&T’s submissions to the FCC.
  • Justice Department lawyer Eric Walsh received especially harsh treatment from Judge Leon after Walsh tried to question a Turner executive about remarks made in an on-air interview with CNBC in 2016. Leon told Walsh he had already ruled that question out of order and warned, “don’t pull that kind of crap again in this courtroom.”

During the trial, AT&T managed to slip in the fact that one of its lawyers was making a generous contribution toward the unveiling of an official portrait of Judge Leon, while oddly suggesting the contribution was totally anonymous.

“One of our lawyers on our team was asked to make an anonymous contribution to a fund for the unveiling of your portrait,” AT&T lawyer Daniel Petrocelli told the judge. “He would like to do so and I cleared it with Mr. Conrath, but it’s totally anonymous.”

Leon responded he had no problem with that, claiming “I don’t even know who gives anything.”

AT&T also attempted to argue that the Justice Department’s case against the merger was prompted by President Trump’s public objections to the deal; Trump had promised to block it if he won the presidency. That outcome is clearly off the table now, and it is unlikely the Justice Department will make any further efforts to block the deal.

AT&T received initial approval of its merger back in June and almost immediately proceeded to integrate the two companies as if the Justice Department appeal did not exist. The Justice Department can still attempt to appeal today’s decision to the U.S. Supreme Court, something AT&T hopes it will not do.

“While we respect the important role that the U.S. Department of Justice plays in the merger review process, we trust that today’s unanimous decision from the D.C. Circuit will end this litigation,” AT&T said in a statement.


Thanks to Phillip Dampier (see source)



BrandPost: The Network Gets Personal

In a competitive environment, with so much emphasis on the need for communications service providers (CSPs) to offer more personalized services to increase customer loyalty, the network plays a crucial role, explains Kent McNeil, Vice President of Software for Blue Planet. While the connection between network infrastructure and the customer relationship isn’t obvious, it is actually what drives personalization of services and competitive edge.

Enhancing the customer experience and lowering churn rates are key objectives for CSPs; however, an influx of competition is challenging customer loyalty. Equally, leading-edge technologies, from devices to cloud, have created new visions for consumers and enterprises. This has significantly changed customer demands, as well as expectations about how those requirements are fulfilled. Customers expect more personalized services, with tailored offers and ease-of-use.

Are CSPs considering the whole suite of tools at their disposal when it comes to gaining customer satisfaction? Managing the front end with an easy-to-use CRM system and providing responsive customer care at key touch points are critical. So too is ensuring the billing system is accurate and reliable. These components are obvious. Yet the network has an equally important role in keeping customers satisfied.

Data feeds personalization

For the service provider, data is like gold dust. Without data, personalization of services is not possible. Data is what feeds the knowledge that enables operators to design policies, and it’s these policies that govern operational support systems (OSS) to achieve the most effective outcomes.

The network is the source of all this data. For personalization to be successful, CSPs need to collect data in real time about what services their customers are accessing and how they’re using those services. This insight builds a picture of customers’ interests and behaviors, allowing CSPs to anticipate future needs and create tailored services and offerings. It also allows a CSP to stand out among its competitors and increase customer loyalty and retention.

Too often, the first time a CSP knows about an unhappy customer is when they churn out. Loyalty is very low in the telecoms sector. If a CSP can differentiate and offer customers what they need, when they need it, customers will remain happy.

The network underpins data

Yet without ensuring that the network can effectively and efficiently deliver this data to back-end OSS, personalization will either fail or will not be speedy enough for the customer. As more automated systems are integrated to cope with the increase and variety of network traffic, solutions must be employed to intelligently monitor and control the network.

Control of the network will play a fundamental part for CSPs as the market continues to expand with web-scale or niche competitors. Owning that powerful data source and network infrastructure—and mining the data—will give CSPs an advantage over new players.

The federation advantage

High-quality delivery of a variety of services is essential to be competitive, yet long-term customer satisfaction can only be achieved if the network is reinforcing the positive customer experience that is established at the onset.

Many legacy network systems are constraining the “ultimate customer experience.” Silos in the back end mean that although the customer receives one package through one account, several manual steps must be completed across multiple OSS to deliver any chosen service. This fragmented approach results in service-delivery inefficiencies, since numerous time-consuming operations are required to fulfill that service. Silos between CSPs further add to the complexity of managing long-reach, site-to-cloud services.

The same challenges exist for ongoing management and assurance of services: time-consuming, error-prone, manual operations. With many different systems, it is difficult and cumbersome for a CSP to effectively monitor all systems in real time for ongoing quality assurance. This means faults and issues are often overlooked and only recognized when the customer complains. Network issues are not easily isolated, identified, or resolved.

Many CSPs face time and cost constraints when it comes to a complete overhaul of OSS, but this is not a necessity. The problem with existing systems is the lack of synchronization across the end-to-end infrastructure. By implementing a unifying system that overlays legacy systems, one can federate important monitoring data to ensure optimization of network performance and enable more efficient utilization of network resources in the delivery of new services. This unified approach enables rapid, accurate delivery and assurance of tailored services, and is a critical stepping stone to future success.

The network transforms the customer experience

Digital transformation is the in-vogue industry buzzword, but it’s not a meaningless term. It signals that the network is evolving to meet the dynamic needs of the new digitally focused and highly knowledgeable customer. This focus on the customer at the center is vital to CSPs’ success in this digital age—they need to ensure that network infrastructure is capable of supporting evolving customer demands, now and in the future. By leveraging their valuable core network infrastructure for data-driven, federated, streamlined operations, they are extremely well positioned to transform the customer experience.

Learn more about Blue Planet network orchestration here.


Thanks to Brand Post (see source)