Thursday, February 28, 2019

Frontier: Losing Customers While Raising Prices; Company Loses $643 Million in 2018

In the last three months of 2018, Frontier Communications reported it said goodbye to 67,000 broadband customers, lost $643 million in revenue year-over-year, and had to write down the value of its assets and business by $241 million, as the company struggles with a deteriorating copper wire network in many states where it operates.
But Wall Street was pleased the company’s latest quarterly results were not worse, and helped lift Frontier’s stock from $2.42 to $2.96 this afternoon, still down considerably from the $125 a share price the company commanded just four years ago.
[...]

Thanks to Phillip Dampier, see the full article: source

Cisco warns a critical patch is needed for a remote access firewall, VPN and router

Cisco is warning organizations with remote users that have deployed a particular Cisco wireless firewall, VPN and router to patch a critical vulnerability in each that could let attackers break into the network.
The vulnerability, which has an impact rating of 9.8 out of 10 on the Common Vulnerability Scoring System, lets a potential attacker send malicious HTTP requests to a targeted device. A successful exploit could let the attacker execute arbitrary code on the underlying operating system of the affected device as a high-privilege user, Cisco stated.
[...]
Thanks to Michael Cooney, see the full article: source

New chemistry-based data storage would blow Moore’s Law out of the water

Molecular electronics, where charges move through tiny, single molecules, could be the future of computing and, in particular, storage, some scientists say.
Researchers at Arizona State University (ASU) point out that a molecule-level computing technique, if its development succeeds, would blow Gordon Moore’s 1965 prophecy — Moore's Law — out of the water: the prediction that the number of transistors on a chip will double every year, allowing electronics to get proportionally smaller. In this case, hardware, including transistors, would conceivably fit on individual molecules, reducing chip sizes far more than Moore ever envisaged.
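
To put the 1965 cadence in concrete numbers, here is a quick back-of-the-envelope sketch (my illustration, not from the article); the 1971 Intel 4004 figure of roughly 2,300 transistors is a commonly cited approximation, and Moore himself later revised the doubling period to two years:

```python
# Back-of-the-envelope look at Moore's 1965 prediction (transistor
# counts doubling every year). Illustrative only; Moore revised the
# cadence to every two years in 1975.
def projected_transistors(start_count: int, start_year: int, year: int,
                          doubling_years: float = 1.0) -> int:
    """Project a transistor count under a fixed doubling period."""
    return int(start_count * 2 ** ((year - start_year) / doubling_years))

# Intel's 4004 (1971) had roughly 2,300 transistors.
print(projected_transistors(2_300, 1971, 1981))                      # ~2.4 million
print(projected_transistors(2_300, 1971, 1981, doubling_years=2.0))  # ~74,000
```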
[...]
Thanks to Patrick Nelson, see the full article: source

BrandPost: Why Data Center Management Responsibilities Must Include Edge Data Centers

Now that edge computing has emerged as a major trend, the question for enterprises becomes how to migrate the data center management expertise acquired over many years to these new, remote environments.
Enterprise data centers have long provided a strong foundation for growth.  They enable businesses to respond more quickly to market demands. However, this agility is heavily dependent on the reliability and manageability of the data center.  As data center operational complexity increases, maintaining uptime while minimizing costs becomes a bigger challenge.
[...]

Thanks to Brand Post, see the full article: source

7 Photos of Equinix Los Angeles – The Center of Interconnection & Entertainment

Why the industrial IoT is more important than consumer devices — and 7 more surprising IoT trends

Given the Internet of Things’ (IoT) perch atop the hype cycle, IoT trend-spotting has become a full-time business, not just an end-of-the-year pastime. It seems every major — and minor — IoT player is busy laying out its vision of where the technology is going. Most of them harp on the same themes, of course, from massive growth to security vulnerabilities to skills shortages.
[...]
Thanks to Fredric Paul, see the full article: source

Wednesday, February 27, 2019

Sample Solution: Automating L3VPN Deployments

A long while ago I published my solution for automated L3VPN provisioning… and I’m really glad I can point you to a much better one ;)

Håkon Rørvik Aune decided to tackle the same challenge as his hands-on assignment in the Building Network Automation Solutions course and created a nicely-structured and well-documented solution (after creating a playbook that creates network diagrams from OSPF neighbor information).

[...]
Thanks to Ivan Pepelnjak, see the full article: source

VMware preps milestone NSX release for enterprise-cloud push

Struggling Dish’s Sling TV Cuts Prices 40% for First 90 Days

Sling TV, one of the first online streaming alternatives to cable television, is slashing prices by 40% for the first three months to attract more subscribers.

Sling’s basic plans are now priced at $15 a month for new customers, with more deluxe tiers available for $25 a month.

[...]
Thanks to Phillip Dampier, see the full article: source

Protecting the IoT: 3 things you must include in an IoT security plan

With many IT projects, security is often an afterthought, but that approach puts the business at significant risk. The rise of IoT adds orders of magnitude more devices to a network, creating many more entry points for threat actors to exploit. A bigger problem is that many IoT devices are easier to hack than traditional IT devices, making them the endpoint of choice for the bad guys.

[...]
Thanks to Zeus Kerravala, see the full article: source

The big picture: Is IoT in the enterprise about making money or saving money?

Data Gravity and Cloud Security

How to move to a disruptive network technology with minimal disruption

Disruptive network technologies are great—at least until they threaten to disrupt essential everyday network services and activities. That's when it's time to consider how innovations such as SDN, SD-WAN, intent-based networking (IBN) and network functions virtualization (NFV) can be transitioned into place without losing a beat.

[...]
Thanks to John Edwards, see the full article: source

Tuesday, February 26, 2019

More Thoughts on Vendor Lock-In and Subscriptions

Albert Siersema sent me his thoughts on lock-in and the recent tendency to sell network device (or software) subscriptions instead of boxes. A few of my comments are inline.

Another trend in the industry is to convert support contracts into subscriptions. That is, the entrenched players seem to be focusing more on that business model (too). In the end, I feel the customer won't reap that many benefits, and you probably will end up paying more. But that's my old grumpy cynicism talking :)

While I agree with that, buying a subscription instead of owning a box (and depreciating it) also makes it easier to persuade the bean counters to switch the gear because there’s little residual value in existing boxes (and it’s easy to demonstrate total cost of ownership). Like every decent sword, this one has two blades ;)

At every customer I've always stressed to include terms like vendor agnostic, open standards, minimal nerd knobs and exit strategies in architecture and design principles. Mind you: vendor agnostic, not the stronger vendor neutral. Agnostic in my view means that one should strive for a design where equipment can be swapped out, but if you happen to choose a vendor specific technology, be sure to have an exit strategy. 

I like the idea, but unfortunately the least common denominator excludes the “cool” features the vendors are promoting at conferences like Cisco Live or VMworld, and once management gets hooked on the idea of “this magic technology can save the world” instead of “it’s Santa bringing me gifts every Christmas”, you’re facing an uphill battle. There’s a reason there’s a management/CIO track at every major $vendor conference.

And this is where the current trend worries me. Take for instance SD-Access. Although I'm sure some genuine thought has gone into the development of the technology, what I see is a complicated stack of technologies and interwoven components, ever more exposed as a magic black box. And in the process, the customer is converted from one business model to the other (subscriptions). Cisco is playing strong in this field, but they're not the only vendor to do so.

There's no real interoperability, and I'm wondering (I should say doubting) whether the complexity is really reduced. And the dependency on a given vendor will undoubtedly result in headaches and probably even downtime.

Formulating an exit strategy becomes ever more daunting because even with proper automation it will probably mean a rip-and-replace.

It's worse than that – every solution has its own API (every vendor will call it open, but that just means “documented”), and switching vendors often means ripping out the existing toolchain and developing (or installing) a new one.

Obviously there are intent-based vendors claiming how they can solve the problem by adding another layer of abstraction. Please read RFC 1925 and The ABC of Lock-In before listening to their presentations.

In the software development world I see an ever-expanding field of options and rapid innovation, much of it based on open source, whereas infrastructure seems to be collapsing into fewer options.

A lot of that is “the grass is greener on the other side of the fence.” The operating system space is mostly a Linux monoculture, with Windows fading and macOS/iOS holding a small market share. Almost everyone is using a MySQL clone as their relational database (kudos to the few PostgreSQL users). If you want to run a web server, you can choose between Apache and Nginx. There are a gazillion programming languages, but the top five haven’t really changed in the last 10 years.

The ever-expanding field of options might also be a mirage. As anyone evaluating open-source automation tools and libraries quickly realizes, there’s a ton of them, but most of them are either semi-abandoned, unsupported, developed for a specific use case, not fit for use, or working on a platform you’re not comfortable with.

Thanks to Ivan Pepelnjak (see source)

Western Digital launches SSDs for different enterprise use cases

Last week I highlighted a pair of ARM processors with very different use cases, and now the pattern repeats as Western Digital, a company synonymous with hard-disk technology, introduces a pair of SSDs for markedly different uses.

The Western Digital Ultrastar DC SN630 NVMe SSD and the Western Digital CL SN720 NVMe SSD both sport internally developed controller and firmware architectures, 64-layer 3D NAND technology and an NVMe interface, but that’s about where the similarities end.

The SN630 is a read-intensive drive rated for two drive writes per day, which means it has the endurance to write the full capacity of the drive twice per day. So a 1TB version can write up to 2TB per day. But these drives are smaller capacity, as WD traded capacity for endurance.
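
For readers unfamiliar with the endurance metric, a small illustrative calculation (my example numbers, not WD’s spec sheet) shows how a drive-writes-per-day rating translates into a daily and lifetime write budget:

```python
# Illustrative drive-writes-per-day (DWPD) arithmetic; the numbers
# are examples, not Western Digital's official specifications.
def daily_write_budget_tb(capacity_tb: float, dwpd: float) -> float:
    """Data the drive is rated to absorb per day."""
    return capacity_tb * dwpd

def lifetime_endurance_tb(capacity_tb: float, dwpd: float,
                          warranty_years: float) -> float:
    """Total bytes written (TBW) implied by a DWPD rating."""
    return capacity_tb * dwpd * warranty_years * 365

print(daily_write_budget_tb(1.0, 2.0))       # 2.0 TB/day for a 1TB, 2-DWPD drive
print(lifetime_endurance_tb(1.0, 2.0, 5.0))  # 3650.0 TB over an assumed 5-year warranty
```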

The SN720 is a boot device optimized for edge servers and hyperscale cloud, with a lot more write performance. Its random-write speed is 10x that of the SN630, and it is optimized for fast sequential writes.

Both use NVMe, which is predicted to replace the aging SATA interface. SATA was first developed around the turn of the century as a replacement for the IDE interface and has its legacy in hard-disk technology; it is now the single biggest bottleneck in SSD performance.

NVMe uses the much faster PCI Express protocol, which is much more parallel and has better error recovery. Rather than squeeze any more life out of SATA, the industry is moving to NVMe in a big way at the expense of SATA. IDC predicts SATA product sales peaked in 2017 at around $5 billion and are headed to $1 billion by 2022. PCIe-based drives, on the other hand, will skyrocket from around $3.5 billion in 2017 to almost $20 billion by 2022.

So the new SSDs are replacements not only for older 15K RPM hard disks but for SATA SSDs as well, each targeting specific use cases.

“The data landscape is rapidly changing. More devices are coming online and IoT is trying to connect everything together. The world is moving from on-prem to the cloud but also from the core cloud to the edge,” said Clint Ludeman, senior manager for product marketing for data center devices at Western Digital.

Reducing latency is becoming an issue, whether it’s a roundtrip from device to data center and back again or reducing latency at the core of the data center. “Big Data opens up opportunities, but then you gotta do something with data. Fast SSDs allow you to do something with the data,” he said.

That’s where the targeted products come into play. “We do see a shift from traditional server architecture, where an OEM would throw a server out there and not know what was running on it. Now we’re seeing a case where customers know their workloads and know their bottlenecks. That’s how we’re designing purpose-built products. As software stacks mature, you see different bottlenecks. It’s a continual thing we’re chasing,” he said.

And by going to NVMe they are able to reduce latency in the software stack to microseconds rather than the milliseconds that a hard disk works on. “We would have performance bottlenecks you couldn’t unlock with SATA or SAS interfaces. Now we can do real-time computing,” said Ludeman.

The CL SN720 is shipping now. The Ultrastar DC SN630 SSD is currently sampling with select customers with broad availability expected in April.

Thanks to Andy Patrizio (see source)

What to know about planning mobile edge systems (MEC)

ICANN urges adopting DNSSEC now

Powerful malicious actors continue to be a substantial risk to key parts of the Internet and its Domain Name System security infrastructure, so much so that the Internet Corporation for Assigned Names and Numbers (ICANN) is calling for an intensified community effort to install stronger DNS security technology.

Specifically, ICANN is calling for full deployment of the Domain Name System Security Extensions (DNSSEC) across all unsecured domain names. DNS, often called the internet’s phonebook, is part of the global internet infrastructure that translates between common-language domain names and the IP addresses that computers need to access websites or send emails. DNSSEC adds a layer of security on top of DNS.

DNSSEC technologies have been around since about 2010 but are not widely deployed, with less than 20 percent of the world’s DNS registrars having deployed it, according to the Regional Internet address Registry for the Asia-Pacific region (APNIC).

DNSSEC adoption has been lagging because it was viewed as optional and can require a tradeoff between security and functionality, said Kris Beevers, co-founder and CEO of DNS vendor NS1.

DNSSEC prevents attacks that can compromise the integrity of answers to DNS queries by cryptographically signing DNS records to verify their authenticity, Beevers said.
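
To make the signing mechanism concrete, here is a minimal sketch using the dnspython library (2.x): it fetches a zone’s DNSKEY RRset together with its RRSIG and checks the self-signature. The zone and resolver address are placeholders, and a real validator would also chase DS records up the chain of trust to the root:

```python
# Minimal DNSSEC self-signature check with dnspython 2.x.
# Placeholder zone/resolver; a full validator must also follow
# DS records up the chain of trust to the root.
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

zone = dns.name.from_text("example.com.")
query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
# TCP avoids truncation of large DNSKEY responses.
response = dns.query.tcp(query, "8.8.8.8", timeout=5)

rrsets = {rrset.rdtype: rrset for rrset in response.answer}
if dns.rdatatype.DNSKEY not in rrsets or dns.rdatatype.RRSIG not in rrsets:
    print("zone appears to be unsigned")
else:
    dnskey = rrsets[dns.rdatatype.DNSKEY]
    rrsig = rrsets[dns.rdatatype.RRSIG]
    try:
        dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
        print("DNSKEY RRset signature validates")
    except dns.dnssec.ValidationFailure as err:
        print(f"DNSSEC validation failed: {err}")
```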

“However, most implementations are incompatible with modern DNS requirements, including redundant DNS setups or dynamic responses from DNS-based traffic-management features,” Beevers said. “Legacy DNSSEC implementations break even basic functions, such as geo-routing, and are hard to implement across multiple vendors, which means poor performance and reduced availability for end users.”

Full deployment of DNSSEC ensures end users are connecting to the actual website or other service corresponding to a particular domain name, ICANN says. “Although this will not solve all the security problems of the Internet, it does protect a critical piece of it – the directory lookup – complementing other technologies such as SSL (https:) that protect the "conversation", and provide a platform for yet-to-be-developed security improvements,” ICANN says.

In a release calling for the increased use of DNSSEC technologies, ICANN noted that recent public reports show a pattern of multifaceted attacks utilizing different methodologies.

“Some of the attacks target the DNS, in which unauthorized changes to the delegation structure of domain names are made, replacing the addresses of intended servers with addresses of machines controlled by the attackers. This particular type of attack, which targets the DNS, only works when DNSSEC is not in use,” ICANN stated.

“Enterprises that are potential targets – in particular those that capture or expose user and enterprise data through their applications – should heed this warning by ICANN and should pressure their DNS and registrar vendors to make DNSSEC and other domain-security best practices easy to implement and standardized. They can easily implement DNSSEC signing and other domain security best practices with technologies in the market today,” Beevers said.  At the very least, they should work with their vendors and security teams to audit their implementations with respect to ICANN's checklist and other best practices, such as DNS delivery network redundancy to protect against DDoS attacks targeting DNS infrastructure, Beevers stated.

ICANN is an organization that typically thinks in decades, so the immediacy of the language – "alert", "ongoing and significant risk" – is telling. They believe it is critical for the ecosystem, industry and consumers of domain infrastructure to take urgent action to ensure DNSSEC signing of all unsigned domains, Beevers said.

“ICANN's direction drives broader policy decisions and actions for other regulatory bodies, and just as importantly, for major technology players in the ecosystem,” Beevers said. “We are likely to see pressure from major technology players like browser vendors, ISPs and others to drive behavioral change in the application-delivery ecosystem to incentivize these changes.”

ICANN’s warning comes on the heels of the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) warning in January that all federal agencies should bolt down their Domain Name System in the face of a series of global hacking campaigns.

CISA said in its emergency directive that it is tracking a series of incidents targeting DNS infrastructure. CISA wrote that it “is aware of multiple executive-branch agency domains that were impacted by the tampering campaign and has notified the agencies that maintain them.”

CISA says that attackers have managed to intercept and redirect web and mail traffic and could target other networked services. The agency said the attacks start with compromising user credentials of an account that can make changes to DNS records.  Then the attacker alters DNS records, like address, mail exchange or name-server, replacing the legitimate address of the services with an address the attacker controls.
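
A low-effort defense implied by the advisory is simply watching your own zone’s critical records for unexpected changes. A minimal sketch with dnspython (the domain and baseline values are hypothetical):

```python
# Sketch of a DNS tampering check in the spirit of the CISA advisory:
# compare critical records against a known-good baseline.
# The domain and expected values below are hypothetical.
import dns.resolver

EXPECTED = {
    ("example.com.", "A"):  {"93.184.216.34"},
    ("example.com.", "MX"): {"10 mail.example.com."},
    ("example.com.", "NS"): {"a.iana-servers.net.", "b.iana-servers.net."},
}

def check_records() -> None:
    for (name, rdtype), baseline in EXPECTED.items():
        answer = dns.resolver.resolve(name, rdtype)
        observed = {rdata.to_text() for rdata in answer}
        if observed != baseline:
            print(f"ALERT: {name} {rdtype} changed to {sorted(observed)}")
        else:
            print(f"OK: {name} {rdtype}")

if __name__ == "__main__":
    check_records()
```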

These actions let the attacker direct user traffic to their own infrastructure for manipulation or inspection before passing it on to the legitimate service, should they choose. This creates a risk that persists beyond the period of traffic redirection, CISA stated. 

CISA noted that FireEye and Cisco Talos researchers had reported that malicious actors obtained access to accounts that controlled DNS records and made them resolve to their own infrastructure before relaying it to the real address. Because they could control an organization’s DNS, they could obtain legitimate digital certificates and decrypt the data they intercepted – all while everything looked normal to users.

ICANN offered a checklist of recommended security precautions that members of the domain-name industry, registries, registrars, resellers and related others should take to protect their systems, their customers’ systems and information reachable via the DNS.

Thanks to Michael Cooney (see source)

Linux security: Cmd provides visibility, control over user activity

BrandPost: How edge computing will bring business to the next level

What do embedded sensors, ecommerce sites, social media platforms, and streaming services have in common? They all produce vast volumes of data, much of which travels across the internet. In fact, Cisco estimates global IP traffic will grow to 3.3 zettabytes annually by 2021 – up three times compared to internet traffic in 2017.

For many businesses, these data packets represent treasure troves of actionable information, from customers’ buying preferences to new market trends. But as the volume and velocity of data increases, so too does the inefficiency of transmitting all this information to a cloud or data center for processing.

Simply put, the internet isn’t designed to factor in how long any given packet will take to reach its destination. To complicate matters, that video of your employee’s new nephew is travelling over the exact same network as business-critical data. The result: network latency, costly network bandwidth, and data storage, security and compliance challenges. That’s especially true in the case of delay-sensitive traffic such as voice and video.

Getting to the source

No wonder businesses are increasingly turning to the edge to solve challenges in cloud infrastructure. Edge data centers work by bringing bandwidth-intensive content closer to the end user, and latency-sensitive applications closer to the data. Types of edge computing vary, including local devices, localized data centers, and regional data centers. But the objective is the same: to place computing power and storage capabilities directly on the edge of the network.

In fact, IDC research predicts that within three years, 45% of IoT-created data will be stored, processed, analyzed, and acted upon close to, or at the edge of, the network, and over 6 billion devices will be connected to edge computing solutions.

From a technology perspective, edge computing delivers a number of key advantages. By transforming cloud computing into a more distributed architecture, disruptions are limited to a single point in the network.

For instance, if a wind storm or cyberattack causes a power outage, its impact would be limited to the edge computing device and the local applications on that device rather than the entire network. And because edge data centers are nearby, companies can achieve higher bandwidth, lower latency, and regulatory compliance around location and data privacy. The result is the same degree of reliability and performance as large data centers.

Edge computing also delivers business benefits to a wide variety of industries. Today’s retailers rely on a 24/7 online presence to deliver superior customer experiences. Edge computing can prevent site outages and increase availability for optimal uptime. By ensuring factory floor operators stay connected to plant systems, edge computing can improve a manufacturer’s operational efficiencies. And for healthcare practitioners, edge computing can place computing close to a device, such as a heart-rate monitor, ensuring reliable access to possibly life-saving health data.

The speed at which the world is generating data shows no signs of slowing down. As volumes mount, edge computing is becoming more than an IT necessity; it’s a critical competitive advantage in the future.

To discover how edge computing can deliver technology and business benefits to your organization, visit APC.com

Thanks to Brand Post (see source)

Merger Complete: Appeals Court Rejects Bid to Throw Out AT&T-Time Warner Merger

The merger of AT&T and Time Warner (Entertainment) is safe.

A federal appeals court in Washington handed the U.S. Department of Justice its worst defeat in 40 years as federal regulators fought to oppose a huge “vertical” merger between two companies in unrelated lines of business.

In a one-page, two-sentence ruling, a three-judge panel affirmed the lower District of Columbia Circuit Court decision that approved the $80 billion merger without conditions. In the lower court, Judge Richard Leon ruled there was no evidence AT&T would use the merger to unfairly restrict competition, a decision scorned by Justice Department lawyers and consumer groups, both claiming the merger would allow AT&T to raise prices and restrict or impede competitors’ access to AT&T-owned networks.

The Justice Department and its legal team seemed to repeatedly irritate Judge Leon in the lower court during arguments in 2018, making it an increasingly uphill battle for the anti-merger side to win.

Unsealed transcripts of confidential bench conferences with the attorneys arguing the case, made public in August 2018, showed Department of Justice lawyers repeatedly losing rulings:

  • Judge Leon complained that the Justice Department used younger lawyers to question top company executives, leading to this remarkable concession by DoJ Attorney Craig Conrath: “I want to tell you that we’ve listened very carefully and appreciate your comments, and over the course of this week and really the rest of the trial, you’ll be seeing more very gray hair, your honor.”
  • Leon grew bored with testimony from Cox Communications that suggested Cox would be forced to pay more for access to Turner Networks. Leon told the Cox executive to leave the stand and demanded to know “why is this witness here?”
  • Leon limited what Justice lawyers could question AT&T CEO Randall Stephenson about regarding AT&T’s submissions to the FCC.
  • Justice Department lawyer Eric Walsh received especially harsh treatment from Judge Leon after Walsh tried to question a Turner executive about remarks made in an on-air interview with CNBC in 2016. Leon told Walsh he had already ruled that question out of order and warned, “don’t pull that kind of crap again in this courtroom.”

During the trial, AT&T managed to slip in the fact one of its lawyers was making a generous contribution towards the unveiling of an official portrait of Judge Leon, while oddly suggesting the contribution was totally anonymous.

“One of our lawyers on our team was asked to make an anonymous contribution to a fund for the unveiling of your portrait,” AT&T lawyer Daniel Petrocelli told the judge. “He would like to do so and I cleared it with Mr. Conrath, but it’s totally anonymous.”

Leon responded he had no problem with that, claiming “I don’t even know who gives anything.”

AT&T also attempted to argue that the Justice Department case against the merger was prompted by public objections to the merger by President Trump, who promised to block the deal if he won the presidency. That clearly will not happen any longer, and it is unlikely the Justice Department will make any further efforts to block the deal.

AT&T received initial approval of its merger back in June and almost immediately proceeded to integrate the two companies as if the Justice Department appeal did not exist. The Justice Department can still attempt to appeal today’s decision to the U.S. Supreme Court, something AT&T hopes the DoJ will not attempt.

“While we respect the important role that the U.S. Department of Justice plays in the merger review process, we trust that today’s unanimous decision from the D.C. Circuit will end this litigation,” AT&T said in a statement.

Thanks to Phillip Dampier (see source)

BrandPost: How is 802.11ax different than the previous wireless standards?

BrandPost: The Network Gets Personal

In a competitive environment, with so much emphasis on the need for communications service providers (CSPs) to offer more personalized services to increase customer loyalty, the network plays a crucial role, explains Kent McNeil, Vice President of Software for Blue Planet. While the connection between network infrastructure and the customer relationship isn’t obvious, it is actually what drives personalization of services and competitive edge.

Enhancing the customer experience and lowering churn rates are key objectives for CSPs; however, an influx of competition is challenging customer loyalty. Equally, leading-edge technologies, from devices to cloud, have created new visions for consumers and enterprises. This has significantly changed customer demands, as well as expectations about how those requirements are fulfilled. Customers expect more personalized services, with tailored offers and ease-of-use.

Are CSPs considering the whole suite of tools at their disposal when it comes to gaining customer satisfaction? Managing the front end with an easy-to-use CRM system and providing responsive customer care at key touch points are critical. So too is ensuring the billing system is accurate and reliable. These components are obvious. Yet the network has an equally important role in keeping customers satisfied.

Data feeds personalization

For the service provider, data is like gold dust. Without data, personalization of services is not possible. Data is what feeds the knowledge that enables operators to design policies, and it’s these policies that govern operational support systems (OSS) to achieve the most effective outcomes.

The network is the source of all this data. For personalization to be successful, CSPs need to collect data in real time about what services their customers are accessing and how they’re using those services. This insight builds a picture of customers’ interests and behaviors, allowing CSPs to anticipate future needs and create tailored services and offerings. It also allows a CSP to stand out among its competitors and increase customer loyalty and retention.

Too often, the first time a CSP knows about an unhappy customer is when they churn out. Loyalty is very low in the telecoms sector. If a CSP can differentiate and offer customers what they need, when they need it, customers will remain happy.

The network underpins data

Yet without ensuring that the network can effectively and efficiently deliver this data to back-end OSS, personalization will either fail or will not be speedy enough for the customer. As more automated systems are integrated to cope with the increase and variety of network traffic, solutions must be employed to intelligently monitor and control the network.

Control of the network will play a fundamental part for CSPs as the market continues to expand with web-scale or niche competitors. Owning that powerful data source and network infrastructure—and mining the data—will give CSPs an advantage over new players.

The federation advantage

High-quality delivery of a variety of services is essential to be competitive, yet long-term customer satisfaction can only be achieved if the network is reinforcing the positive customer experience established at the outset.

Many legacy network systems are constraining the “ultimate customer experience.” Silos in the back end mean that although the customer receives one package through one account, several manual steps must be completed, across multiple OSS, to deliver any chosen service. This fragmented approach results in service delivery inefficiencies, since numerous time-consuming operations are required to fulfill the service. Silos between CSPs further add to the complexity of managing long-reach, site-to-cloud services.

The same challenges exist for ongoing management and assurance of services: time-consuming, error-prone, manual operations. With many different systems, it is difficult and cumbersome for a CSP to effectively monitor all systems in real time for ongoing quality assurance. This means faults and issues are often overlooked and only recognized when the customer complains. Network issues are not easily isolated, identified, or resolved.

Many CSPs face time and cost constraints when it comes to a complete overhaul of OSS, but this is not a necessity. The problem with existing systems is the lack of synchronization across the end-to-end infrastructure. By implementing a unifying system that overlays legacy systems, one can federate important monitoring data to ensure optimization of network performance and enable more efficient utilization of network resources in the delivery of new services. This unified approach enables rapid, accurate delivery and assurance of tailored services, and is a critical stepping stone to future success.

The network transforms the customer experience

Digital transformation is the in-vogue industry buzzword, but it’s not a meaningless term. It signals that the network is evolving to meet the dynamic needs of the new digitally focused and highly knowledgeable customer. This focus on the customer at the center is vital to CSPs’ success in this digital age—they need to ensure that network infrastructure is capable of supporting evolving customer demands, now and in the future. By leveraging their valuable core network infrastructure for data-driven, federated, streamlined operations, they are extremely well positioned to transform the customer experience.

Learn more about Blue Planet network orchestration here.

Thanks to Brand Post (see source)

Networking for Nerds: Deconstructing Interconnection to the Cloud

Monday, February 25, 2019

Building the Network Automation Source of Truth

This is one of the “thinking out loud” blog posts I’m writing as I prepare my presentation for the Building Network Automation Solutions online course. I’m probably missing a gazillion details - your feedback would be highly appreciated.

One of the toughest challenges you’ll face when building a network automation solution is “where is my source of truth” (or: what data should I trust). As someone way smarter than me once said: “You can either have a single source of truth or many sources of lies”, and knowing how your devices should be configured and what mistakes have to be fixed becomes crucial as soon as you move from gathering data and creating reports to provisioning new devices or services.

The first step on your journey should be building a reliable device inventory - if you have no idea what devices are in your network, you cannot even consider automating network deployment or operations. Don’t even try to use Ansible or a similar tool to get it - there are tons of open-source and commercial network discovery tools out there, and every decent network management system has some auto-discovery functionality, so finding the devices shouldn’t be a big deal.

Now for the fun part: assuming you didn’t decide to do a one-off discovery to populate the device inventory, will you trust the data in the network management system, or will you migrate the data to some other database (IPAM/CMDB software like NetBox immediately springs to mind) and declare that database your source of truth… as far as device inventory goes.

In any case, your network automation tool (Ansible, Chef, Puppet, Salt, Nornir… isn’t it wonderful to have so many choices?) expects to get its device inventory in its own format, which means that you have to export data from your chosen source of truth into the format expected by your tool - unless, of course, you believe that a bunch of YAML or JSON files stored in semi-random places and using interestingly convoluted inheritance rules is the best possible database there is.
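
To make the export step concrete, here is a minimal sketch (my illustration, not course material) that turns a device list, shaped roughly like what you might pull from NetBox’s API, into an Ansible YAML inventory:

```python
# Minimal sketch: export a device inventory (shaped like data from an
# IPAM/CMDB such as NetBox) into Ansible's YAML inventory format.
# The device records below are made up for illustration.
import yaml

devices = [
    {"name": "edge-sw-01", "role": "switch", "mgmt_ip": "10.0.0.11"},
    {"name": "edge-sw-02", "role": "switch", "mgmt_ip": "10.0.0.12"},
    {"name": "wan-rtr-01", "role": "router", "mgmt_ip": "10.0.0.1"},
]

inventory = {"all": {"children": {}}}
for device in devices:
    group = inventory["all"]["children"].setdefault(
        device["role"] + "s", {"hosts": {}})
    group["hosts"][device["name"]] = {"ansible_host": device["mgmt_ip"]}

with open("inventory.yml", "w") as handle:
    yaml.safe_dump(inventory, handle, default_flow_style=False)
```

You could run something like this from cron, a CI pipeline, or a webhook fired on inventory changes - which is exactly the timing question the next paragraph raises.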

If you decide to use text files as your database and Notepad as your UI, go with YAML. Regardless of all the complaints you might be hearing on Twitter, it’s still easier to read than JSON.

Should you do the export/translation every time you run your automation tool, periodically, whenever something changes, or what? Welcome to one of the hard problems of computer science. I’ll try to give you a few hints in an upcoming blog post (where I’ll also tackle another challenge: is your device configuration your source of truth, and why is that a bad idea?).

In case you want to know more:

You can get access to the Ansible webinar with Standard ipSpace.net Subscription and access to the network automation online course with Expert Subscription.

Thanks to Ivan Pepelnjak (see source)

Windstream Declares Bankruptcy; Another Legacy Telco Falters

Windstream Holdings, Inc. filed for bankruptcy this afternoon, citing its inability to cover $5.8 billion in outstanding debt.

The independent phone company, which provides legacy landline and broadband service to around 1.4 million customers in 18 states, filed voluntarily to reorganize under Chapter 11 of the U.S. Bankruptcy Code in the U.S. Bankruptcy Court for the Southern District of New York, citing a judge’s decision almost two weeks ago finding that the company defaulted on its obligations.

“Following a comprehensive review of our options, including an appeal, the Board of Directors and management team determined that filing for voluntary Chapter 11 protection is a necessary step to address the financial impact of Judge Furman’s decision and the impact it would have on consumers and businesses across the states in which we operate,” said Tony Thomas, president and chief executive officer of Windstream. “Taking this proactive step will ensure that Windstream has access to the capital and resources we need to continue building on Windstream’s strong operational momentum while we engage in constructive discussions with our creditors regarding the terms of a consensual plan of reorganization.”

Windstream received a commitment from Citigroup Global Markets Inc. for $1 billion in debtor-in-possession (“DIP”) financing. Assuming a bankruptcy judge approves the arrangement, Windstream claims this stop-gap financing will allow it to run its current business as usual.

Windstream provides residential service in 18 states: Alabama, Arkansas, Florida, Georgia, Iowa, Kentucky, Minnesota, Mississippi, Missouri, Nebraska, New Mexico, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, South Carolina and Texas.

The company claims it was forced into bankruptcy after a judge found that Windstream’s 2015 attempt to shift its valuable fiber optic network assets off its own books into a sheltered real estate investment trust (REIT) named Uniti Group violated the rights of bondholders who hold some of Windstream’s debt. Those debts are backed, in part, by the valuable fiber optic assets Windstream spun off to the new entity. In fact, Uniti’s fiber optic assets are essential to Windstream’s viability. The phone company has the exclusive right to use Uniti’s fiber assets, and two-thirds of Uniti’s revenue comes from Windstream, making the two companies inseparable.

Windstream’s bankruptcy is a concern to investors of both companies because it will allow Windstream to renegotiate the terms of its contract with its fiber partner. Windstream customers are equally concerned because the phone company needs Uniti’s network to manage its broadband service.

The judge’s decision on Feb. 15 to declare the arrangement inappropriate was reportedly a shock to the investor community, which has made money buying repackaged corporate debt in the form of bonds for years. Corporations have issued bonds to retire older debt, while giving investors a piece of the action. Since investors are making money, they typically do not complain too loudly about the persistence of corporate debt, frequently repackaged in new bonds. As a result, companies can hold onto more cash used to pay shareholder dividends and executive compensation instead of permanently retiring debt.

Aurelius, a hedge fund, makes some of its money by scrutinizing these arrangements for contract violations such as the Uniti spinoff. When it finds one, it takes a stake in the company and then threatens to sue as a harmed investor. Based on the judge’s decision, Aurelius won a judgment that will effectively empty the pockets of many of the bondholders and investors, who could lose much of their investments because of the bankruptcy. If the hedge fund actively seeks out other questionable arrangements or violations of bondholders’ rights at other companies, it could cause an earthquake in an investment community that has quietly conspired with companies to generate transactions that enrich investors while allowing companies to carry more debt.

Customers could end up covering some of the costs of today’s bankruptcy filing if Windstream files a plan with the Bankruptcy Court promising to raise prices to help it demonstrate ongoing viability.

Windstream’s Thomas complained the phone company is little more than a victim of a predatory hedge fund out to enrich itself at the expense of others.

“The company believes that Aurelius engaged in predatory market manipulation to advance its own financial position through credit default swaps at the expense of many thousands of shareholders, lenders, employees, customers, vendors and business partners,” Thomas said. “Windstream stands by its decision to defend itself and try to block Aurelius’ tactics in court. The time is well-past for regulators to carefully examine the ramifications of an unregulated credit default swap marketplace.”

Thanks to Phillip Dampier (see source)

Hidden Rate Hike: Spectrum Drops Premium Networks from TV Bundles

Spectrum cable television customers with Silver or Gold tiers will find two premium channels have disappeared from channel lineups, with no corresponding decrease in rates.

This hidden rate increase took effect Feb. 15 after Spectrum dropped Cinemax from its Silver and Gold packages and EPIX from its Gold package, with little explanation. Customers have been notified they can acquire these channels a-la-carte, for an additional $9.99/mo for Cinemax and $5.99/mo for EPIX.

The premium network cutbacks were originally planned to be significantly worse: Charter Communications had notified some customers it was also planning to delete Starz and Encore from its Gold tier, potentially making the $40 add-on not worth the price. Just days before the changes were to take effect, Charter changed its mind, and those channels will continue to be available as part of the Gold package.

Some customers are upset about the changes.

“It’s a hidden rate hike,” complained Lois Blumenthal. “We are still paying the same price for Silver or Gold, only getting fewer channels for it.”

Spectrum customer service appeared to be sensitive to customer complaints and threats to downgrade cable TV service, which would only increase the impact of cord-cutting. So the company is offering a hidden deal to current customers who subscribed to Silver or Gold TV tiers before Feb. 15 and who call 1-855-70-SPECTRUM to share their displeasure about the changes:

  • Silver Plan customers qualify for one year of Cinemax at no charge, after which the network will cost $9.99/month.
  • Gold Plan customers qualify for one year of Cinemax -and- one year of EPIX at no charge, after which Cinemax will cost $9.99/mo and EPIX will cost $5.99/mo.

Customers can ask about these promotions when they call. While no expiration date was available on these offers, it makes sense to call sooner rather than later in case they disappear.

It could have been worse: Spectrum had already notified many of its subscribers that the premium network cutbacks would also include Starz and Encore. Charter changed its mind, but too late to stop the notices about those channel deletions from going out.

Spectrum has adjusted its advertising:

Spectrum Silver (includes TV Select — add $20 a month)

  • 175+ cable channels with FREE HD
  • Includes HBO, SHOWTIME & NFL Network
  • On-the-go with HBO GO, SHOWTIME ANYTIME
  • Enjoy thousands of On Demand choices to watch when & where you want
  • Watch on your Apple TV, Samsung Smart TV, Roku, Xbox One, tablet, smartphone or visit SpectrumTV.com
  • Download 80+ network apps and take on-the-go

Spectrum Gold (includes TV Select and TV Silver — add $40 a month)

  • 200+ cable channels with FREE HD
  • Includes HBO, SHOWTIME, STARZ, TMC, ENCORE, NFL Network & NFL Redzone
  • Enjoy thousands of On Demand choices to watch when & where you want
  • Watch on your Apple TV, Samsung Smart TV, Roku, Xbox One, tablet, smartphone or visit SpectrumTV.com
  • Download 80+ network apps and take on-the-go

For all Spectrum customers, the cost of adding most premium add-on channels a-la-carte (without a promotion) decreased effective Feb. 15:

  • HBO remains unchanged at $15/mo
  • Showtime remains unchanged at $15/mo
  • Starz was $15, decreasing to $9.99
  • Encore was $15, decreasing to $5.99
  • Cinemax was $15, decreasing to $9.99
  • TMC was $15, decreasing to $9.99
  • EPIX was $15, decreasing to $5.99



Thanks to Phillip Dampier (see source)

Verizon Steps Up FiOS Promotions: Free Netflix 4K Premium + $20 Discount for Verizon Wireless Customers

Verizon is getting more aggressive about its promotions to attract new customers and keep existing ones happy with new offers that include free Netflix Premium and up to $20 in monthly discounts for Verizon Wireless customers choosing a double or triple play FiOS package.

Verizon claims some of these offers are available to new and “qualified” existing customers until Apr. 3, 2019, and could deliver significant savings on plans that range in price from $39.99 to $79.99 a month.

The budget-minded, broadband-only 100 Mbps plan offers a $50 Visa prepaid card and one year of service for $39.99 a month. The plan is only available to new customers, and the price does not include the $12/month router charge, fees, or taxes. Customers must sign up for autopay using a checking account or debit card, choose paper-free billing, and pass a basic credit check.

Those new or existing customers looking for faster internet-only service can choose the 300 Mbps plan for $20 more — $59.99 a month, price-locked for two years. This plan includes six months of Netflix Premium, which supports Ultra High Definition (UHD/4K) streaming and allows up to four devices to stream at the same time. This is a $15.99/month value. These prices do not include the $12/month router charge, fees, or taxes.

New and existing customers who want to avoid the router fee and get gigabit speed can pay $20 more ($79.99 a month) and lock in service for three years with no router rental fee (a $12/mo value) and one full year of Netflix Premium (a $15.99/mo value).

Verizon also offers new customers a triple play package including Custom TV, a slimmed-down customizable TV package, with landline phone service and gigabit internet for two years at $79.99 a month, with one year of Netflix Premium (a $15.99/mo value). This plan carries a two-year contract with a $350 early termination fee. There are a number of fine print fees to consider, however. Verizon charges a $12/mo set-top box fee, a $12/mo router charge, a $4.49/mo Broadcast TV surcharge, and up to $7.89/mo in Regional Sports Network surcharges. Also not included in the promotional price: a $0.99 “FDV Administrative Fee,” whatever that is. Altogether, these extra fees add up to $37.37 a month, turning the real price of this promotion into as much as $117.36 a month before other taxes and fees. Customers also have to sign up for autopay using a checking account or debit card, choose paper-free billing, and pass a basic credit check.

Somewhat reducing the sting of surcharges and fees on the triple play offer noted above is a discount worth $20 a month if you are a Verizon Wireless customer with a qualifying Go Unlimited or Beyond Unlimited plan. A bill credit of $10 a month will appear on your FiOS bill and another $10/mo credit will appear on your monthly Verizon Wireless bill as long as you maintain both qualifying FiOS and wireless plans.



Thanks to Phillip Dampier (see source)

IDG Contributor Network: Named data networking: Stateful forwarding plane for datagram delivery

The Internet was designed to connect things easily, but a lot has changed since its inception. Users now expect the internet to find the “what” (i.e., the content), but the current communication model is still focused on the “where.”

The Internet has evolved to be dominated by content distribution and retrieval, yet networking protocols still focus on connections between hosts, which surfaces many challenges.

The most obvious solution is to replace the “where” with the “what” and this is what Named Data Networking (NDN) proposes. NDN uses named content as opposed to host identifiers as its abstraction.

How traditional IP works

To deliver packets from a source to a destination, IP operates in two planes. The first is the routing plane, also known as the control plane. It enables routers to share routing updates and select the best paths to construct the forwarding information base (FIB). The second is the forwarding plane, also known as the data plane, where packets are forwarded to the next hop based on a FIB lookup.

Routing (the control plane) is stateful and can adapt to network changes such as link failures, router crashes, new routes, or better alternative paths. The actual IP forwarding, however, is stateless and cannot adapt to anything without instruction from the control plane. This has often been described as “smart routing and dumb forwarding.”

IP is only interested in delivering packets to a particular destination. Typically, you put a node on a network and it performs a broadcast, known as Address Resolution Protocol (ARP), for the destination address. This binds the logical IP identifier to a physical identity: the media access control (MAC) address.

Routing can then propagate the routes; there is no need to propagate individual host addresses, since they are covered by the prefix. The MAC binding is only needed on the local network, not for global communications. This communication model is what has allowed IP to scale.

Introducing NDN forwarding

Significantly, Named Data Networking (NDN) goes beyond the traditional paradigm and adds intelligence and state to each device. For this, it uses a new network forwarding plane. Data consumers, similar to source hosts in the IP world, send what are known as Interest packets for the data they are looking for. Nodes along the path forward the Interest packets and maintain state for all pending Interest requests.

So we have two types of packets: Interest and Data. Both carry a name instead of an IP address, and Interest and Data packets are exchanged one-for-one, enabling strict flow balance.

Data consumers place the name of the desired piece of data into an Interest packet and send it to the network. You can think of an Interest packet as roughly analogous to a hypertext transfer protocol (HTTP) GET request; keep in mind, though, that HTTP is an application protocol while NDN is a network-layer protocol, despite the similar request-and-retrieval process.

Essentially, if you want data, you logically broadcast for it: you state your interest by giving the name prefix of what you want. Consumers broadcast an Interest packet over any and all available communication paths.

NDN has a simpler relationship with Layer 2 than IP does, which allows it to use multiple simultaneous connectivity options, such as 3G, Bluetooth, and Ethernet. Unlike IP, NDN packets cannot loop, which is why NDN can safely take advantage of multiple interfaces.

More information on NDN names

The primary concern of NDN is data, not nodes, so there is no need to bind a Layer 3 IP address to a Layer 2 MAC address. There simply is no IP address. Fundamentally, the NDN network layer uses application data names to communicate.

A name is an opaque object; the only thing NDN cares about is that it has a hierarchical structure. That structure is used for “longest match” lookups, similar to IP prefix lookups. Both IP and NDN use longest-prefix matching because of the hierarchical aggregation it enables.
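To make the lookup concrete, here is a minimal sketch in Python. The names, prefixes, and interface labels are invented for illustration and are not drawn from any real NDN forwarder:

# Toy FIB keyed by hierarchical name components; values are outbound interfaces.
fib = {
    ("com", "example"): ["eth0"],
    ("com", "example", "video"): ["eth1", "wlan0"],
}

def longest_prefix_match(name):
    """Return the longest FIB prefix matching a slash-delimited name, plus its interfaces."""
    components = tuple(name.strip("/").split("/"))
    for length in range(len(components), 0, -1):
        entry = fib.get(components[:length])
        if entry is not None:
            return components[:length], entry
    return None, None

# The full name matches the three-component prefix, not the shorter two-component one.
print(longest_prefix_match("/com/example/video/clipA/seg0"))
# (('com', 'example', 'video'), ['eth1', 'wlan0'])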

Nodes along the path use this name to forward the Interest packet toward the data producers, of which there may be several. There is largely no concept of a destination in NDN, unlike IP. Once an Interest packet reaches a node holding the requested data, that node returns Data packets containing both the name and the content.

The core difference between IP and NDN

Essentially, IP takes a location-centric approach to data delivery: there is a destination, the packet must reach it, and the destination address guides how traffic is forwarded.

With NDN, the name guides the forwarding, and there may be multiple devices that can satisfy the request; with IP, there is usually only one destination.

NDN data structures

For this new style of forwarding to occur, we need new data structures: the pending Interest table (PIT), the forwarding information base (FIB), and the content store (CS).

When an Interest packet hits a node, the node first checks the content store, which remembers data it has seen before. It is like the buffer in a normal router, but with a different replacement policy.

If there is a match, the node acts as a data producer and returns the data out the same interface the Interest packet arrived on. If there is no match, the router consults the PIT, which holds all pending Interests, i.e., Interests that have not yet been satisfied. These are Interests that could not be satisfied locally and were forwarded on to someone else, but still must be remembered.

If no entry is found in the PIT, the router then examines the FIB for forwarding. Keep in mind that the FIB holds name prefixes, not IP prefixes, and each entry can use multiple outbound interfaces.
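Putting the three structures together, the lookup order just described (content store first, then PIT, then FIB) can be sketched as follows. This is an illustrative Python outline, not a real forwarder: it reuses the toy longest_prefix_match from the earlier sketch, and the send callback and face labels are hypothetical:

# Illustrative Interest/Data pipeline: content store -> PIT -> FIB.
content_store = {}   # name -> cached data
pit = {}             # name -> set of inbound faces still waiting for the data

def on_interest(name, in_face, send):
    if name in content_store:
        send(in_face, ("data", name, content_store[name]))  # satisfied from cache
    elif name in pit:
        pit[name].add(in_face)     # same Interest already pending; just remember the face
    else:
        prefix, out_faces = longest_prefix_match(name)
        if out_faces:
            pit[name] = {in_face}  # record who asked before forwarding
            for face in out_faces:
                send(face, ("interest", name))

def on_data(name, data, send):
    content_store[name] = data     # opportunistic caching for future Interests
    for face in pit.pop(name, ()): # satisfy every consumer waiting in the PIT
        send(face, ("data", name, data))

A real forwarder would also expire PIT entries and apply a per-prefix forwarding strategy; both are omitted here for brevity.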

Stateful forwarding plane

NDN has a stateful forwarding plane for datagram delivery (per packet and per hop), in contrast to IP’s stateless forwarding plane. IP forwarding by itself has no adaptability; it must be told what to do by the routing protocol. The NDN communication model, by contrast, keeps datagram state at every node. IP is end-to-end, while NDN is hop-by-hop, which enables a per-hop stateful forwarding plane: each node in the path can make its own decision about where to forward an Interest.

A stateful forwarding plane adds significant intelligence. Nodes can measure the performance of different paths, rapidly detect failures, avoid failed links, circumvent prefix hijacking, use multiple paths to mitigate congestion, perform built-in network caching, and deliver native multicast.

Each NDN node makes its own informed decision about which path to take, based on observations of previously forwarded Interest packets. This is what enables per-datagram state on each node.

Unlike IP’s end-to-end packet delivery model, the Interest/Data exchange is hop-by-hop, so there is no notion of a specific source and destination. In the IP world, if a datagram is sent to you, you had better be able to deal with it: the sender has only one choice, and you are it. You have to be able to handle the traffic for the addresses you announce. The result is global dependencies.

With NDN, however, there are no global dependencies, only local ones. If there are five places where the content might be, a node can ask all five simultaneously.

NDN routing protocols and RTT

Similar to today’s IP architecture, NDN uses routing protocols. IP routing protocols propagate the reachability of IP addresses; NDN routing protocols distribute name prefixes instead.

Like IP, NDN has a FIB, but instead of IP prefixes it holds name prefixes, and each name can use multiple interfaces. Generally speaking, the IP FIB contains only next-hop information, whereas the NDN FIB records state from both the routing and forwarding planes, enabling what is known as adaptive forwarding.

The NDN FIB also contains a per-interface estimate of the round-trip time (RTT), which allows path performance to be measured. An RTT sample is taken every time a Data packet is received.
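One plausible way to maintain such an estimate is an exponentially weighted moving average kept per (prefix, interface) pair. The smoothing factor and data layout below are assumptions for illustration, not details of the NDN specification:

# Hypothetical per-interface RTT estimator, similar in spirit to TCP's smoothed RTT.
ALPHA = 0.125      # assumed smoothing factor
rtt_estimate = {}  # (prefix, face) -> smoothed RTT in milliseconds

def record_rtt_sample(prefix, face, sample_ms):
    """Fold in a new sample, taken when a Data packet arrives on this face."""
    old = rtt_estimate.get((prefix, face))
    rtt_estimate[(prefix, face)] = sample_ms if old is None else (1 - ALPHA) * old + ALPHA * sample_ms

def rank_faces(prefix, faces):
    """Prefer the interfaces with the lowest estimated RTT for this prefix."""
    return sorted(faces, key=lambda f: rtt_estimate.get((prefix, f), float("inf")))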

This differs from today’s model, in which the transmission control protocol (TCP) on the end host detects congestion and adjusts the sending rate accordingly.




Thanks to Matt Conran (see source)

BrandPost: Silver Peak Powers an SD-WAN Telemedicine Backpack

The software-defined wide-area networking (SD-WAN) revolution knows no boundaries. Now the technology has found its way into a telemedicine backpack that can deliver real-time communications between doctors and first responders on scene in the field.

Telemedicine pioneer swyMed, based in Lexington, Mass., offers a high-performance telemedicine backpack called the DOT — Doctors on Tap — which enables reliable, real-time video communications. Powered by the Silver Peak Unity EdgeConnect™ SD-WAN edge platform, the DOT can improve the performance of existing wireless connections and communicate even at long distances from wireless towers.

The DOT is an impressive package of technology. The lightweight mobile unit incorporates antennas, redundant dual-modem wireless connections, an all-day battery, an integrated speaker/microphone, two digital scopes, and a ruggedized sunlight-readable tablet with an integrated full HD camera. It also includes the solid-state Silver Peak EdgeConnect Ultra Small (US) SD-WAN appliance, which integrates SD-WAN into the backpack and serves as an on-demand network capable of securely and directly connecting to more than one wireless network simultaneously.

The result is a high-powered, mobile communications network that can be carried by medical professionals into the field to provide advanced communications capabilities that reach beyond what standard wireless services offer. SwyMed CEO Stefano Migliorisi says the DOT can enable medical professionals to communicate in hard-to-reach areas using an SD-WAN that can overcome connectivity issues and improve performance by tying together disparate communications networks.

“We use very powerful antennas, our own patented technology, and the Silver Peak EdgeConnect appliance to connect to towers far away,” says Migliorisi. “It uses SD-WAN technology, forward error correction (FEC), and path conditioning to maintain communications quality by bonding multiple SIMs to combine signals. With sub-second failover, communications are maintained continuously should one of the signals experience congestion or an outage. You can have a very good connection to a tower that’s very far away.”

Migliorisi says the DOT and Silver Peak technology have helped paramedics improve patient care in the field when they need to urgently communicate with their doctors. In one example, the patient wanted to speak to the doctor to get a recommendation on a preferred hospital. After speaking directly to the patient and paramedics, the doctor instructed the paramedics to transport the patient to a different hospital than originally intended, ultimately providing the patient with better care that may have saved his life.

“When the patient arrived at the hospital he went into cardiac arrest,” says Migliorisi. “The medical staff was able to resuscitate him and rapidly perform an emergency intervention. The doctor’s remote assessment directed the patient to a hospital with more extensive capabilities, so he was where he needed to be to receive the right level of care.”

The DOT is used internationally in places that have wide variations in communications infrastructure. For example, Migliorisi says it’s currently being used in a region in India to connect rural mothers with doctors over wireless networks where the wireless towers are very far apart.

SwyMed says the combination of its own patented software and the Silver Peak SD-WAN platform has been shown to boost the connectivity of poor wireless connections. By adding the capability to use multiple carriers simultaneously, the SD-WAN can make real-time adjustments in the data transport to deliver the highest quality of experience by aggregating multiple SIM cards.

For example, Migliorisi said some recent tests on Long Island showed that when the DOT was connected to one tower that was yielding 50 percent and another one yielding 70 percent, the two connections could be combined to create an aggregate signal strength of 90 percent.

Migliorisi says the Silver Peak technology is key to the delivery of the DOT because of the seamless switching it provides between networks. He says swyMed looked at other mobile SD-WAN solutions, but they dropped connections when switching from one mobile network to another. That is not a big deal for email or web browsing, where a few seconds of delay in switching goes unnoticed. But in live video communication, even sub-second delays are noticeable because our eyes and ears are so finely tuned to detect variations.

“It was very difficult to find anything that was seamless. Silver Peak has much more granularity. It’s a very sophisticated product.”

Migliorisi said the key features that differentiate the Silver Peak SD-WAN product include:

  • Built-in security between the DOT and a termination point at a private data center
  • The capability to apply different policies based on the region
  • Automatic handling of IP address changes due to service provider network address translation (NAT)
  • Subsecond switching between wireless networks without delay or lost connections

“It’s very easy to change the policies,” says Migliorisi. “There is granularity. You can centrally configure business intent overlays with Unity Orchestrator™. You can have one for Europe and one for Asia. It’s very flexible.”

The bottom line is that Migliorisi sees the Silver Peak SD-WAN platform as key to the DOT offering, which is now being offered in a few dozen places across the US, Europe and Asia. Deployments are expected to scale exponentially over the next several years.

To learn more about the SD-WAN backpack, listen to this podcast where swyMed’s COO Jeff Urdan describes how it works, the role Silver Peak Unity EdgeConnect™ plays in enabling 4G LTE connectivity, and stories of how the technology is used by first responders to help save lives.



Thanks to Brand Post (see source)

From an Idea to an Innovation Center: An Australian Story 

Free configuration management using Ansible, Ubuntu, VirtualBox

Configuration management (CM) utilities can automate the configuration of network devices, saving time and eliminating many of the human errors introduced during manual configuration.

While this functionality is rolled up in software-defined networking and intent-based networking products, it can also be tapped for free using open-source software.

This article shows how to use the free Ansible CM utility from Red Hat, running on the free Ubuntu Linux operating system, within a virtual machine created with the free VirtualBox software. For the purposes of this cookbook, Ansible is used to automate CM for Cisco IOS-based routers, but Ansible modules are available for other vendors’ gear and other utilities, including A10, Aruba, Citrix NetScaler, F5, Fortinet, Juniper, Palo Alto Networks and others.

Step 1: Launch the Ansible server

First, go to this site, download and install VirtualBox software to create a virtual machine on your computer where you can install Ubuntu and then run Ansible.

Go to this site and download the Ubuntu 18.04.1 LTS desktop edition to your local hard drive.

Now using VirtualBox, create an Ubuntu virtual machine on which to run Ansible.

In VirtualBox, click the blue “New” icon.

Type in the name of our virtual server: Ansible Server.

Select the Type: Linux.

Select the Version: Ubuntu (64-bit), click Next.

Give this system 4 GB of RAM (4096 MB), click Next.

Accept the default 10 GB of hard disk storage, leave “Create a virtual disk now” checked, and click Create.

Leave the default VDI selected, click Next.

Leave the default “Dynamically allocated” selected, click Next.

Leave the file location and size default settings, click Create.

You see that one virtual server has been created.

Now you need to make some configuration settings before booting it up.

Select the Ubuntu virtual server from the list of virtual servers in VirtualBox and click on the “Settings” cog button.

Select Storage on the left pane of the Setting options.

Next to Controller: IDE, click the button to “Adds Optical Drive” > Choose Disk > Select the Ubuntu 18.04.1 ISO file downloaded previously.

Select Network, on the left pane of Setting options.

Under the Adapter 1 tab, make sure “Enable Network Adapter” is checked.

Set “Attached to:” to Bridged Adapter, and under Name select the Ethernet interface of your computer, then click OK.
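If you prefer to script these steps rather than click through the GUI, the same virtual machine can be created with VirtualBox’s VBoxManage command-line tool. This is a sketch: it assumes the ISO sits in your current directory and that your host’s Ethernet interface is named eth0, so adjust both for your system.

VBoxManage createvm --name "Ansible Server" --ostype Ubuntu_64 --register

VBoxManage modifyvm "Ansible Server" --memory 4096 --nic1 bridged --bridgeadapter1 eth0

VBoxManage createmedium disk --filename "AnsibleServer.vdi" --size 10240

VBoxManage storagectl "Ansible Server" --name "SATA" --add sata

VBoxManage storageattach "Ansible Server" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "AnsibleServer.vdi"

VBoxManage storagectl "Ansible Server" --name "IDE" --add ide

VBoxManage storageattach "Ansible Server" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium ubuntu-18.04.1-desktop-amd64.iso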

Now you are ready to start the virtual machine.

Click the “Start” green right-arrow icon. (Now the Ubuntu operating system will start to boot up.)

Select your language: English, Install Ubuntu

English, English, click Continue

Choose Normal Installation, Download updates while installing, click Continue.

Select Erase disk, click Install Now, click Continue.

Select your timezone, click Continue.

Enter your name, the virtual computer’s name, a username, and a password, then click Continue.

Let it install the software. This takes a few minutes.

Click “Restart Now” when prompted.

When it prompts “Remove the installation media”, press Enter.

Now the system is running, and you can log on to the VirtualBox console as the user you just created.

Go through the introductory screens > Next > Next > Next > Done.

The default resolution can be pretty small for many computers, so you can increase it if you like.  You might be able to simply scale the window to be larger by clicking and dragging the lower-right corner of the window.  You can also change the display size with the Displays app.  Click on the top-left “Activities”, type Displays, click on the Displays app. Under Resolution, select the resolution, Apply, Keep changes.

When prompted, install updates because you want to be working with the latest software. Enter your password and if prompted, restart the system.

Step 2: Configure the Ansible server

Now you want to further update and patch the system, and install some basic tools, Python, and Ansible.

Log into the Ansible Server console in VirtualBox and launch a terminal window.

Click Activities, then type the word “Terminal” and the Terminal app will be listed. Click on that icon.

Run the following commands from the Terminal “$” prompt.  When you run the first sudo command you will be prompted for your password.

sudo apt update

sudo apt upgrade -y

sudo apt install openssh-server -y

sudo apt install net-tools -y

sudo apt install sshpass -y

sudo apt install tree -y

sudo apt install python python-pip python-setuptools -y

sudo apt install ansible -y

At this point you can use SSH to connect to the Ansible Server.
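With Ansible installed, you can sanity-check the setup against a Cisco IOS router using a minimal inventory and playbook. The router address, credentials, and file names below are placeholders you will need to adapt; the ios_command module and the network_cli connection type ship with the Ansible release installed above.

Create an inventory file named hosts.ini:

[routers]
router1 ansible_host=192.0.2.1

[routers:vars]
ansible_connection=network_cli
ansible_network_os=ios
ansible_user=admin
ansible_password=YourRouterPassword

Create a playbook named show_version.yml:

---
- name: Collect the IOS version from each router
  hosts: routers
  gather_facts: no
  tasks:
    - name: Run show version on the device
      ios_command:
        commands:
          - show version
      register: output

    - name: Print the first line of the response
      debug:
        msg: "{{ output.stdout_lines[0][0] }}"

Then run it from the terminal:

ansible-playbook -i hosts.ini show_version.yml

If the playbook prints the router’s IOS version string, Ansible can reach and manage the device.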



Thanks to Scott Hogg (see source)