Thursday, January 31, 2019

Hargray to Acquire Dark Fiber Systems of Jacksonville, FL

SAVANNAH, GA and JACKSONVILLE, FL — Hargray, a regional communications provider and metro-fiber over-builder, announced an agreement to acquire Dark Fiber Systems (DFS), a dark fiber provider in Jacksonville, Florida. This announcement follows Hargray’s agreement to acquire USA Communications’ Alabama assets as well as Hargray’s acquisition of several colocation facilities.

“We are pleased to announce our agreement to acquire Dark Fiber Systems and the resulting expansion of our presence in northern Florida,” said Hargray Chairman & CEO, Michael Gottdenker. “Our focus has been and remains to provide robust last-mile connectivity to businesses and residences throughout the Southeastern United States. We are eager to expand DFS’ unique network and bring Hargray’s full suite of communications solutions and our unmatched, superior customer service to businesses in Jacksonville.”

Chris McCorkendale, senior vice president of Hargray Fiber, added, “Hargray is deeply committed to providing advanced broadband access throughout the Southeast. We are dedicated to local service and community engagement, and we have witnessed firsthand the demand for reliable communications solutions and responsive customer service. Specifically in Jacksonville, not only will we expand the footprint of DFS’ 150 route-mile network, but we will also provide multi-gig lit data services including dedicated Internet, customized multi-location Ethernet solutions, colocation and managed services, and enhanced voice and video solutions.”

The transaction is expected to close by the end of the second quarter of 2019. Financial terms were not disclosed.

Thanks to BBC Wires (see source)

Calix Introduces GigaMesh Satellite Wi-Fi Mesh Platform

SAN JOSE, CA — Calix announced the GigaMesh, the latest addition to the Calix Smart Home and Business solution powered by EXOS. The GigaMesh enables communications service providers (CSPs) to extend Wi-Fi coverage to all corners of their subscribers’ homes and to grow revenue by monetizing the enhanced subscriber experience. The EXOS-powered GigaMesh is the ultimate “plug-and-play” wireless satellite that works alongside the GigaSpire to ensure unmatched Wi-Fi coverage in hard-to-reach areas within a home or business. Combined with Calix Marketing Cloud, CSPs can quickly realize this revenue opportunity by leveraging data-driven insights to engage the subscribers who are ready to extend this managed Wi-Fi throughout their homes.

Although subscribers want Wi-Fi to work everywhere, the myriad connected devices and hubs scattered throughout their homes have made this a challenge. The GigaMesh simplifies extending coverage into harder-to-reach areas. The GigaMesh plugs directly into a wall outlet, provides guidance for optimal in-home placement via a simple LED indicator, and is configured in seconds without any need for a support call or an on-site visit from the service provider.

“Since smart home devices are often equipped with weak Wi-Fi radios and low-quality antennas, despite being designed to make subscribers’ lives easier, they often cause headaches,” said Shane Eleniak, senior VP, platforms at Calix. “Calix is committed to ensuring that service providers can deliver the right experience for each subscriber. With a fully configurable portfolio of EXOS-powered systems such as the GigaSpire and now the GigaMesh, CSPs can monetize the tailored solutions they provide. For a smart home system to be truly smart, it has to match flexibility with performance. The Calix GigaMesh ensures this level of performance by maximizing Wi-Fi coverage, delivering lightning-fast internet speeds that can exceed 1.2 Gbps.”

Thanks to BBC Wires (see source)

AFL Introduces Plenum-rated Indoor/Outdoor Loose Tube Cable

SPARTANBURG, SC — AFL is introducing the LQ-Series I/O Plenum Loose Tube Cable, developed to provide a lower cost inter-building cabling solution for campus-based networks. Because the LQ cable is tested and qualified to meet the leading industry cable standards for outside plant (OSP) and inside plant (ISP) plenum-rated fiber optic cables, it can be used as a single-cable solution for OSP and ISP cable installations. This dual-rating functionality eliminates the need to use separate cables for OSP and ISP route segments and the associated requirement to include a splice transition point within 50 feet of the inside plant to connect the two separate cables.

Combined Outside and Inside Plant Ratings
“The National Fire Protection Association (NFPA) NEC guidelines allow for outside plant-only cables to be routed into a building space for up to only 50 feet,” explained Stephen Martin, RCDD, senior product manager for AFL’s Enterprise Cable division. “If the termination point of the outside plant cable is beyond 50 feet from the point of entrance into the building, the cable must be placed into a fire-protected pathway or spliced to a properly flame-rated inside plant cable.”

Martin continued, “AFL’s LQ-Series Indoor/Outdoor Plenum Loose Tube cable, with combined outside and inside plant ratings, eliminates the need to transition to a separate flame-rated inside plant cable when in-building termination points are beyond 50 feet from the entrance location.”

The cable construction consists of 12-fiber, gel-free buffer tubes stranded around a central strength member. The finished core is jacketed with a highly flame-retardant, UV-resistant thermoplastic. The LQ-Series cable is available in fiber counts from 12 to 144, single-mode or multimode. Applications include inter-building campus backbone connections, installation in OSP buried pathways or above-ground exposed cable trays, in cable routes that require cables to transit OSP space, and inside plant environments that require cables to be riser- or plenum-rated.

Thanks to BBC Wires (see source)

C Spire Announces Consortium to Tackle Rural Digital Divide

RIDGELAND, MS — C Spire, in coordination with Airspan Networks, Microsoft, Nokia and Siklu, announced a collaboration that will test and deploy a variety of broadband technologies, in combination with new service and construction models, to advance broadband connectivity in rural communities. In addition, the consortium will advance new models for coordination with regional fixed and wireless internet service providers, utilities and other regional stewards. The consortium’s goal is a shared blueprint for closing the broadband adoption and affordability gap in rural communities across America.

C Spire Tech Movement Initiative
As one of the nation’s leading regional fixed and wireless broadband providers, C Spire is positioned to disrupt the current approach to rural broadband and address these challenges. These efforts are part of the broader C Spire Tech Movement initiative, which is committed to moving communities forward through technology with a focus on broadband access, workforce development and innovation. “Access to and adoption of broadband technology is critically important to economically move rural communities forward and ensure they are not left behind in today’s new digital economy,” said C Spire President Stephen Bye.

The gap between cities and rural parts of the country is substantial. According to a 2018 report from the Federal Communications Commission, over 19.4 million rural Americans still lacked basic broadband at the end of 2017. This is having profound impacts on the nation’s rural communities.

Unfortunately, the problems are even more acute in states like Mississippi and Alabama where nearly a third of rural residents have no access to basic broadband. A 2017 Mississippi State University Center for Technology Outreach study found that the state’s rural counties lose millions of dollars a year in deferred economic benefits due to lack of availability and slow internet speeds. The report concluded that those same counties will lose billions of dollars over a 15-year period.

Scaling at the Edge
“Our nation’s broadband adoption gap is a solvable problem that will not be limited in the next few years by the coming breadth of new technologies themselves, but rather by how well we facilitate them to scale at the edge,” said C Spire Chief Innovation Officer Craig Sparks. “Hyperlocal collaboration and highly automated tools combined with these easy-to-deploy network technologies are going to be key enablers,” he added. “C Spire’s intent is to bring the thought leaders in this consortium together to disrupt our own thinking in these areas.”

Using the real-world testbeds of Mississippi and Alabama, the consortium will begin the work and share collaborative learnings through a series of studies and open workshops over an 18-month engagement. The shared learnings are intended to facilitate affordable broadband internet access and help drive further adoption through digital education and community outreach. The consortium intends to address these problems in ways that can scale not only in regional testbeds, but throughout the nation at large.

C Spire will initiate this rural broadband effort on Jan. 29-31, 2019 by hosting the International Wireless Industry Consortium’s “Enhancing Rural Connectivity, New Wireless Opportunities and Deployment Scenarios” workshop in New Orleans. While closed to the press, all consortium partners will attend, along with other industry experts, for technical discussions.

Thanks to BBC Wires (see source)

Superior Essex Announces New ADSS Fiber Product Line

ATLANTA — Superior Essex, a manufacturer and supplier of communications cable and accessory products serving the communications industry, is streamlining fiber-optic network installations with a new all-dielectric self-supporting (ADSS) product line.

EnduraSpan is a new small, lightweight, single-jacket self-supporting fiber cable geared toward both short- and long-span aerial applications. The product line is designed for versatility and ease of use, allowing any service provider or carrier (telcos, CLECs, municipalities and public utility operators) to efficiently install new fiber-optic network cable in a variety of aerial environments. Primary applications include railways and power and telecommunications pole routes. It is an economical solution for single-pass installations that require non-metallic, self-supporting cable.

EnduraSpan utilizes a stranded loose-tube design able to withstand handling hardships while also allowing easy midspan access. The efficient design provides elevated tensile strength, long-term reliability and includes dry-core technology with PFM gel-filled loose tubes for water ingress protection. Benefits include both reduced network costs and cable prep time.

EnduraSpan is offered in spans from 25 to 400 meters for light, medium and heavy loading conditions and fiber counts up to 144.

Thanks to BBC Wires (see source)

DRaaS options grow, but no one size fits all

AutoNation spent years trying to establish a disaster recovery plan that inspired confidence. It went through multiple iterations, including failed attempts at a full on-premises solution and a solution completely in the cloud. The Fort Lauderdale, Fla.-based auto retailer, which operates 300 locations across 16 states, finally found what it needed with a hybrid model featuring disaster recovery as a service.

“Both the on-premises and public cloud disaster recovery models were expensive, not tested often or thoroughly enough, and were true planning and implementation disasters that left us open to risk,” says Adam Rasner, AutoNation’s vice president of IT and operations, who was brought on two years ago in part to revamp the disaster recovery plan. The public cloud approach sported a hefty price tag: an estimated $3 million if it were needed in the wake of a three-month catastrophic outage. “We were probably a little bit too early in the adoption of disaster recovery in the cloud,” Rasner says, noting that the cloud providers have matured substantially in recent years.

AutoNation, which also owns collision centers and auction houses and launched its own precision parts line in 2018, has a new disaster recovery plan featuring a blend of colocation-based and as-a-service-based disaster recovery, with 75% of applications targeted to recover from a Denver colocation facility and 25% from Amazon Web Services. The environments are orchestrated by DRaaS provider Cohesity and its secondary data management platform, which backs up and replicates virtual servers, applications and data to the colocation facility and to AWS. Cohesity also manages failover and recovery.

“The ability in a disaster to flip a switch and automatically spin up VMs off-premises lets me sleep better at night,” Rasner says.

What is DRaaS?

The DRaaS market is a complex scene. There are hundreds of DRaaS providers, all with different approaches and capabilities for replicating and hosting servers and data. Some DRaaS services focus on virtual servers, while others also back up physical servers; some rely on on-site backup appliances, others don’t. It’s a growing market, as enterprises look to third-party providers for failover in the event of natural disasters or service disruptions. Market research firm Technavio predicts the global DRaaS market will expand at a compound annual growth rate of nearly 36% between 2018 and 2022.

For Ken Adams, CIO of Miles & Stockbridge in Baltimore, DRaaS is a way to fully embrace the cloud but still address compliance demands for the 480-employee law firm. ISO standards require law firms to preserve data in three different locations. An early cloud adopter, Adams embraced the as-a-service model and saw the opportunity to use it for disaster recovery.

Miles & Stockbridge uses ClearSky Data’s on-demand platform and appliances to access and store virtual servers and data locally and in a colocation facility in Virginia, and to send data out to a third location: a virtual cache server on Amazon’s AWS, which Adams calls his “insurance policy.”

“ClearSky was originally just a storage platform for us, and then we decided to try putting our servers on the appliances, which have solid-state drives. We had no performance hit on the servers and we got that extra protection from having the servers – not just the data – ready in multiple locations,” he says.

While the appliance in Virginia is updated almost in real time, the AWS version of data is a little older, saving on traffic. Disaster recovery, he says, is now easy. “You just push a button in the ClearSky console that works with VMware and fails over from one environment to the other.” 

Adams has dedicated fiber lines from two different ISPs connecting the ClearSky appliances so they can easily handle the heavy demands of applications such as litigation support. However, he says the burden on them is not as great as it might be because some applications such as the firm’s document management solution are already accessed as SaaS, giving them built-in disaster recovery.

Which apps are a fit for DRaaS?

Spencer Suderman, principal consultant for tech research and advisory firm ISG in Stamford, Conn., says as interest grows in DRaaS and more players enter the market, IT teams have to consider the needs of their servers and data. While some servers and applications might port easily to a cloud-based “as a service” disaster recovery environment, others might be resistant because they are proprietary or are highly interdependent with other applications. 

If IT thought getting applications to the cloud in the first place was difficult, adding on DRaaS certainly adds to the complexity, Suderman says. For instance, containerized applications in virtual servers might not be able to fail over or recover properly. “A virtualized server still has dependencies,” he says. And, even if the application works, the data transport might cause issues. “Let’s just say you have a recovery time objective of six hours. If you have a terabyte of data on a 100M bit/sec link, it will take you 23 hours to download all that data. You won’t be able to meet your RTO,” he says.
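Suderman’s arithmetic is easy to verify: a terabyte is 8 x 10^12 bits, and at 100M bit/sec that works out to 80,000 seconds, or roughly 22 hours of raw transfer time; protocol overhead and a shared link push it to his 23-hour figure and beyond.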

AutoNation’s Rasner finds that the scope of applications suitable for DRaaS is limited in the automotive industry, where it’s common to have legacy applications that were custom-built or have a lot of tentacles into other applications such as AutoNation’s 13-year-old CRM system. AWS, Rasner says, is best suited to off-the-shelf and stand-alone applications such as AutoNation’s equity mining tool, which helps service teams determine if customers would find better value completing an expensive repair or buying a new car. AWS also houses backups older than 40 days. As legacy applications are refreshed or refactored, Rasner says they will be added to the AWS disaster recovery environment. 

ISG's Suderman recommends intensive planning and monthly, bi-monthly, or quarterly drills with the DRaaS provider. “Disaster recovery is probably one of the most under-planned services,” he says, and he anticipates that DRaaS, where you’re handing off some responsibilities to a provider, will only make it worse. “Everyone talks a good game about disaster recovery, but what’s the breadth and depth of the planning you’ve done for a real disaster? DRaaS drills will tell you how portable your environment really is.”

Some considerations: Are all your applications in one place and on virtual machines that can be quickly spun up? Is your data fresh? How long can your organization stand to be down, and does your provider know the priority of your applications and data?

Perhaps the most important question, if you are in a highly regulated industry: Do you have visibility into your disaster recovery sites? “You might not be able to tell where your application is running if you’re using cloud-based infrastructure,” Suderman says.

Getting started with DRaaS

Vishal “Steve” Mathur, senior IT manager at Baltimore-based food manufacturer TIC Gums, is at the start of his company’s DRaaS journey. His first step was to redo the company’s WAN infrastructure, which had relied on a single MPLS line out to its three sites. “When our MPLS line went down, all three sites were shut down because we couldn’t get to the Internet for Office 365 or Salesforce,” he says.

Now TIC Gums has built-in redundancy with three lines from three separate ISPs and independent firewalls at each site that provide high availability to sustain cloud-based backup, storage, and disaster recovery. “With the infrastructure we had, it would have taken days, if not weeks to bring the business back up,” Mathur says.

Although the company initially thought it would implement disaster recovery on a platform like AWS or Microsoft Azure, Mathur charted out a score card that put Expedient’s DRaaS offering ahead of the others. “The biggest question we always returned to was: ‘What kind of service and support would we get from the big players?’ Over the long term, we wanted the more personal relationship and support,” he says.

The company has worked closely with Expedient to identify the core stack of applications that would need to be recovered, and the work to redesign those applications is 80% complete. “This year, we will be migrating that pod of applications over to Expedient’s data center,” Mathur says. TIC Gums’ DRaaS RTO is less than two hours.

“We will be able to initiate disaster recovery based on standard operating procedures and will be able to bring everything back up from one phone call to Expedient,” he says.

Mathur already has set out goals to test the DRaaS twice a year and to adjust standard operating procedures accordingly. Servers will be moved from tier to tier (each tier denotes how many hours the server can be down) based on the findings of the drills, which are done in partnership with Expedient. Mathur only has to dedicate one system administrator from his team: “95% of disaster recovery is left to the provider,” he says.

AutoNation’s Rasner warns fellow IT professionals not to get complacent. “You still have to push the button and declare a disaster. Then there are things that need to be tested, validated, and, in some cases, manually intervened with,” he says.

In addition, he says, “DRaaS is not one size fits all.” Each application and bit of infrastructure needs to be evaluated, and companies need to consider the appropriateness of capital expenditures versus operational expenditures. How he justifies it: “All you’re doing in disaster recovery is replicating and replicating, and you can do that via DRaaS without incurring the cost of all the heavy infrastructure depreciating and not doing anything value-added.” 

Overall, Rasner is happy with his DRaaS experience: “We’ve tested it, and it is rock solid. Even though it was painful to get here, our disaster recovery is in a much better place than it was.”

Thanks to Sandra Gittlen (see source)

DARPA explores new computer architectures to fix security between systems

Solutions are needed to replace the archaic air-gapping of computers used to isolate and protect sensitive defense information, the U.S. government has decided. Air-gapping, still in common use, is the practice of physically isolating data-storing computers from other systems, computers, and networks. In theory an air-gapped machine can’t be compromised, because there are no links into it; it simply isn’t connected to anything.

However, many say air-gapping is no longer practical as the cloud and internet take hold of massive swaths of data and communications.

“Keeping a system completely disconnected from all means of information transfer is an unrealistic security tactic,” says the Defense Advanced Research Projects Agency (DARPA) on its website, announcing an initiative to develop completely new hardware and software that will allow defense communications to take place securely among myriad existing systems, networks, and security protocols.

The Guaranteed Architecture for Physical Security (GAPS) program it is introducing will be split into three formal areas: hardware, software, and validation against Department of Defense (DoD) systems. A fourth realm is also promised, and that’s the commercialization of the elements:

“Commercializing the resulting technologies is also an objective,” the publicly funded agency says. The GAPS program should “create safer commercial systems that could be used for preserving proprietary information and protecting consumer privacy.”

Commercializing something like a defense security architecture — the objective being to secure data as it moves between disparate systems — could ultimately help commerce in a similar way to how the government has assisted the internet by allowing a military-owned, watered-down GPS to be used by all. Getting funding also becomes easier.

“Modern computing systems must be able to communicate with other systems,” DARPA says of its plans. That includes “those with different security requirements.” It’s saying cloud systems and the internet are here, aren't going away, and need to be dealt with, in other words.

The problem with air-gapping

Air-gapping does work. The problem, though, is that it’s not only hard to implement and enforce (workers have gotten used to networks and the cloud), but also expensive. Installing breaks between systems disrupts working collaborations and is hard to set up due to overall complexity. And it’s equally difficult to administer: You can’t just send patches across the network — there isn’t one.

“Interfaces to such air-gapped systems are typically added in after the fact and are exceedingly complex, placing undue burden on systems operators as they implement or manage them,” DARPA explains.

A better solution in today's environment, then, is to accept that users need, or want, to share data, and to figure out how to keep the important bits private, particularly as the data crosses networks and systems with varying levels and types of security implementations and ownership.

The GAPS thrust will be in isolating the sensitive “high-risk” transactions and providing what the group calls “physically provable guarantees” or assurances. A new cross-network architecture, tracking, and data security will be developed that creates “protections that can be physically enforced at system runtime.”

How they intend to do that is still to be decided. Radical forms of VPN (an encrypted pipe through the internet) would be today’s attempted solution. Whichever method they choose will be part of a $1.5 billion, five-year investment in government and defense electronics systems. And enterprises and consumers may benefit.

“As cloud systems proliferate, most people still have some information that they want to physically track, not just entrust to the ether,” says Walter Weiss, DARPA program manager, in the release.

Thanks to Patrick Nelson (see source)

What programming languages rule the Internet of Things?

As the Internet of Things (IoT) continues to evolve, it can be difficult to track which tools are most popular for different purposes. Similarly, trying to keep tabs on the relative popularity of programming languages can be a complex endeavor with few clear parameters. So, trying to figure out the most popular programming languages among the estimated 6.2 million IoT developers (in 2016) seems doubly fraught — but I’m not going to let that stop me.

There’s not a lot of information on the topic, but if you’re willing to look at sources ranging from Medium to Quora to corporate sites and IoT blogs, and you’re willing to go back a few years, you can pick up some common threads.

IoT Developer Survey: Top IoT programming languages

According to the Eclipse Foundation’s 2018 IoT Developer Survey, here are the top IoT programming languages:

  1. Java
  2. C
  3. JavaScript
  4. Python
  5. C++
  6. PHP
  7. C#
  8. Assembler
  9. Lua
  10. Go
  11. R
  12. Swift
  13. Ruby
  14. Rust

Those top four positions haven’t budged since the 2017 IoT Developer Survey, when Java, C, JavaScript, and Python topped the chart.

Looking deeper, though, the 2018 survey also ranked IoT programming languages by where the code will run: on IoT devices, on gateways, or in the cloud. For devices, C and C++ lead Python and Java; for gateways, the order is Java, Python, C and C++; in the cloud, it’s Java, JavaScript, Python, and PHP.

Based on that data, according to Chicago-based software shop Intersog, “If it’s a basic sensor, it’s probably using C, as it can work directly with the RAM. For the rest, developers will be able to pick and choose the language that best suits them and the build.” Intersog also cited Assembly language, B#, Go, Parasail, PHP, Rust, and Swift as having IoT applications, depending on the task.

IoT programming languages that pay the most

Back in 2017, IoT World took a different approach, trying to suss out which IoT programming languages pay developers the most. The results?

“Java and C developers can, on average, expect to earn higher salaries than specialists in the other languages used in the IoT, although senior Go coders have the highest salary potential. Skilled Go developers are among the best paid in the industry, even though junior and mid-level Go developers earn modestly compared to their peers.”

App development firm TechAhead, meanwhile, named C, Java, Python, JavaScript, Swift, and PHP as the top six programming languages for IoT projects in 2017.

Finally, over on Quora, the arguments over IoT programming languages continue to rage, with one long-running thread (Which programming languages will be most valuable in the IoT) attracting more than 20 answers starting in 2015 and continuing through 2018. The nominees mostly revolve around the usual suspects, with Java, Python, and C/C++ predominating.

A multilingual future for IoT?

Clearly, there’s a consensus set of top-tier IoT programming languages, but all of the top contenders have their own benefits and use cases. Java, the overall most popular IoT programming language, works in a wide variety of environments — from the backend to mobile apps — and dominates in gateways and in the cloud. C is generally considered the key programming language for embedded IoT devices, while C++ is the most common choice for more complex Linux implementations. Python, meanwhile, is well suited for data-intensive applications.

Given the complexities, maybe IoT for All put it best. The site noted that, “While Java is the most used language for IoT development, JavaScript and Python are close on Java's heels for different subdomains of IoT development.”

Perhaps the most salient prediction, though, turns up all over the web: IoT development is multi-lingual, and it's likely to remain multi-lingual in the future.

Thanks to Fredric Paul (see source)

Wednesday, January 30, 2019

SRv6: One Tool to Rule Them All

I got some interesting feedback from one of my readers on Segment Routing with IPv6 extension headers:

Some people position SRv6 as the universal underlay and overlay due to its capabilities for network programming by means of feature+locator SRH separation.

Stupid me replied: “SRv6 is NOT an overlay solution but a source routing solution.”

So where would I need source routing?

Considering that, where would you need source routing (the ability to specify intermediate hops in the path)? For example, it doesn’t work well with service chaining unless your VNFs support it.

There are some supposed use cases where you could use your ISP as a global transport backbone even when your end sites are connected to another ISP. This might even make sense…

Then there are actual use cases for source routing at the WAN edge of large content providers – they want to send the traffic from their web proxy servers to some destinations over multiple uplinks. Not surprisingly, every solution along these lines that I’m aware of uses either L2 tricks or MPLS, because those work (as opposed to works-with-engineering-code-in-PowerPoint technologies).

Back to one-tool-to-rule-them-all

I should have known better. Here’s what I got back from the same reader:

I guess their point is that in case of SRv6 you get a single mechanism that can be used for underlay (like MPLS transport in fabric) and for overlay (send instructions to particular end points where l3vpn should start for example), so instead of MPLS or VXLAN+IP you get SRv6 with IPv6 :)

RFC 1925 Rule 5 immediately comes to mind:

It is always possible to agglutinate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.

In plain English: Just because you could doesn’t mean that you should.

In a word: Don’t.

Also, if you’re looking for a universal tool, why don’t you start with this one… and once you cut down a tree with it, and make a table out of that tree, while brushing your teeth, cutting your fingernails, and opening a bottle of wine, do let me know how it worked out.

Thanks to Ivan Pepelnjak (see source)

SGI lives on in the form of a French AI-focused supercomputer

France's IDRIS supercomputing center announced it will deploy a new HPE SGI 8600 supercomputer in June that is capable of reaching 14 petaflops at peak performance, which would put the system among the top 15 supercomputers in the world, based on the November 2018 TOP500 list.

Named Jean Zay, after a French politician, the system will be designed specifically for artificial intelligence (AI) workloads. French President Emmanuel Macron has called for a national AI strategy. The system will sport 1,528 Intel Xeon Scalable nodes and 261 GPU nodes, each with four Nvidia Tesla V100 GPUs.

Using all-flash storage from DataDirect Networks, Jean Zay will offer more than 300GB per second of read/write throughput to power faster simulations. The system will also use direct liquid cooling, joining the growing parade of systems using liquid instead of air for cooling.

SGI still has an impact

It shows that SGI, despite its name all but vanishing from the landscape, can still have an impact. SGI was one of the early Unix vendors, specializing in desktop graphics acceleration. SGI workstations were used to create the dinosaurs of “Jurassic Park,” and for a while SGI was as big a star as any actor.

But the company fell behind Sun Microsystems and IBM and went through a series of failed reinventions. By 2006 it was selling high-performance systems running Linux on Intel processors but adapted with its own technologies, like the NUMAlink memory mesh for very high-speed transfers between systems.

In 2009, Rackable Systems bought SGI and adopted that company’s name. It struggled on for a few years before HPE bought it out for a pittance in 2016. HPE adapted NUMALink to its Superdome mission-critical servers but also kept some of the SGI products.

HPE sells the Integrity and Superdome server lines as mission-critical systems, while the SGI 8600 is primarily an AI/HPC design, with Xeon Scalable Processors, the ill-fated Xeon Phi, and Nvidia SXM GPUs all linked by HPE’s Message Passing Interface (MPI), designed to improve the performance of HPC applications in a cluster. It is a mammoth, self-contained system that is the size of a pair of refrigerators.

What the Jean Zay supercomputer will do

The Jean Zay supercomputer “will focus on research across fundamental physical sciences such as particle physics and cosmology, and biological sciences, to foster discoveries in fusion energy, space exploration and climate forecasting while also empowering applied research to optimize areas like combustion engines for automobiles and planes, pharmaceutical drugs, and solutions for natural disasters and disease pandemics,” according to a statement from HPE.

Thanks to Andy Patrizio (see source)

BrandPost: As WAN Services Move to the Network Edge, Top Vendors Emerge

As enterprises come under increasing pressure to be agile, efficient, and user-centric, wide-area network (WAN) edge infrastructure services are becoming critical to digital transformation. By deploying software-defined WAN (SD-WAN) services closer to users’ end locations, such as branch offices and connected devices, enterprises can improve application performance, manage data traffic flow, and reduce latency, all while reducing network infrastructure operations and maintenance costs.

A growing number of enterprises are recognizing the competitive benefits of moving WAN services to the network edge. In a Gartner survey, 27% of organizations said they intend to exploit edge computing in 2018 as part of their infrastructure strategies, with that percentage rising to 70% by the end of 2019. According to Gartner’s report, by the end of 2023, more than 90% of WAN edge infrastructure refresh initiatives will be based on virtual customer premises equipment (vCPE) platforms or SD-WAN software/appliances versus traditional routers, up from less than 40% today.

The emergence of this market prompted Gartner to unveil its first-ever “Magic Quadrant” report for WAN edge infrastructure in October 2018. Amid fierce competition in an increasingly crowded market, only 20 vendors made Gartner’s list. One of them was Huawei, which was named a challenger in the global WAN edge infrastructure market because of the continuous and rapid growth of its market share, along with its leading SD-WAN solution and products.

Currently, Huawei boasts more than 20,000 WAN edge customers. According to both Gartner and IDC market share reports, Huawei’s enterprise router revenue ranked second worldwide in Q3 2018. Additionally, the IDC report shows that Huawei’s enterprise router revenue ranked first in the Chinese market in Q3 2018.

Huawei’s Intent-Driven SD-WAN Solution accelerates service provisioning with automated deployment throughout the whole process; guarantees optimal cloud service experience through intelligent application optimization; and reduces operations and maintenance (O&M) difficulty by leveraging intelligent O&M technology. This helps enterprises achieve on-demand multibranch-to-multicloud interconnection and construct intent-driven branch interconnection networks featuring high performance, low latency, and zero packet loss. In October 2018, Huawei launched a full line of next-generation SD-WAN routers and innovative SD-WAN cloud service for enterprise customers.

Industry-leading next-generation SD-WAN routers

Huawei’s next-generation AR series SD-WAN routers, including a full line of customer premises equipment (CPEs) and universal CPEs (uCPEs), are mission-critical flagship products for Huawei’s Intent-Driven SD-WAN Solution.

Under the hood of Huawei’s brand-new SD-WAN routers are a unique “CPU+NP” hardware-based acceleration engine, a multicore concurrent processing mechanism, and a three-level, high-speed cache. Combined, these offer Huawei customers forwarding performance that is two to eight times the industry average. These routers can also merge multiple service functions, such as routing, switching, software-defined WAN (SD-WAN), WAN optimization controller (WOC), Wi-Fi, voice, virtual private networks (VPNs), and security.

Featuring strong networking capabilities, the next-generation SD-WAN routers provide customers with more than 10 types of WAN interfaces (Eth, LTE Cat6, xDSL, xPON, etc.). Huawei’s solution supports more than 20 networking modes (partial-mesh, hub-spoke, full-mesh, hierarchical networking, multibranch-to-multicloud interconnection, reliable multi-CPE multilink networking, etc.) and large-scale SD-WAN deployment with up to 20,000 CPEs, as well as smooth scale-out through distributed control components such as virtual route reflectors (vRRs).

Additionally, by leveraging Huawei’s vCPE AR1000V, enterprises can achieve interconnection from multiple branches to multiple clouds including Microsoft Azure, AWS, and HUAWEI CLOUD. All of these enable flexible, on-demand interconnections under all scenarios for enterprises.

Huawei’s solution supports multiple zero-touch provisioning (ZTP) modes, such as DHCP-, email-, and USB-based deployment, making device plug-and-play a reality. It also supports provisioning of more than 10 mainstream value-added services (VASs) in a matter of minutes, helping enterprises achieve simplified branch network O&M and fast service deployment.

Moreover, Huawei’s next-generation SD-WAN routers allow customers to implement “three intelligences”: intelligent application identification, application-based intelligent traffic steering, and intelligent application acceleration. In terms of intelligent application identification, Huawei’s SD-WAN routers adopt a combination of first-packet identification (FPI), deep-packet inspection (DPI), and self-defined application identification technologies, enabling 100% visibility of network-wide applications.

The application-based intelligent traffic steering feature, driven by bandwidth thresholds and application quality, improves WAN bandwidth utilization from 60% to 90% and guarantees optimal links for critical applications. In terms of intelligent application optimization, Huawei’s solution uses an innovative adaptive forward-error correction (A-FEC) algorithm and three-level hierarchical quality of service (HQoS) technology, delivering strong resistance to packet loss and guaranteeing real-time voice and video experience. For example, even at a packet loss rate of 20%, neither frame freezing nor artifacts will occur.

Huawei’s solution also leverages a next-generation, ultra-high-speed, large-file transfer algorithm, FillP (short for Fill-the-Pipe), which Stanford University recognized as having the highest transfer efficiency among more than 15 large-file transfer algorithms; it improves average Transmission Control Protocol (TCP)-based large-file transfer speeds up to 100-fold.

To date, many global carrier, enterprise, and managed service provider (MSP) customers — such as Italy’s TIM, Norway’s Broadnet, and China’s Ping An Group and Fnetlink — have chosen Huawei’s Intent-Driven SD-WAN Solution. For example, China’s Ping An Technology (a wholly-owned subsidiary of and the platform service provider for China’s Ping An Group) leveraged Huawei’s solution in the first half of 2018 to quickly roll out artificial intelligence (AI)-based customer service to 200 branch offices that offer insurance services in the first project stage. This reduced bill issuance time from two hours to just 10 minutes. Further, Ping An Technology was able to significantly lower leased-line costs and improve O&M efficiency while delivering superior customer service.

Innovative SD-WAN cloud service

Huawei has started offering a one-stop, on-demand SD-WAN cloud service in the Chinese market. Enterprises can log in to HUAWEI CLOUD anytime, anywhere to purchase a comprehensive set of resources, including the next-generation AR series high-performance SD-WAN routers, SD-WAN service, VASs, cloud resources, and O&M services provided by professional MSPs.

Leveraging Huawei’s next-generation AR series high-performance SD-WAN routers in combination with Huawei’s cloud-based Agile Controller and professional MSP services, Huawei’s SD-WAN cloud service is suitable for interconnection scenarios of chain enterprises, multibranch large corporations, and small- and medium-sized enterprises.

For enterprises with cross-region or cross-country branches, Huawei’s SD-WAN cloud service uses vCPEs to build cloud sites and intelligently select the optimal link through a large number of global points of presence (POPs) of HUAWEI CLOUD, thus implementing single-hop cloud connection. With the network access speed improved five-fold, Huawei’s next-generation SD-WAN routers provide the ultimate intelligent cloud and network experience for enterprises.

Open SD-WAN ecosystem

Huawei offers its open SD-WAN architecture, which delivers openness capabilities through open universal computing gateways — uCPEs that enable:

  • On-demand provisioning of more than 10 mainstream VASs in minutes
  • Interconnection with third-party application systems through more than 150 types of open application programming interfaces (APIs)
  • Deployment on multiple clouds (Microsoft Azure, AWS, and HUAWEI CLOUD)

This builds an open industry ecosystem with partners for enterprise customers.

Along with other vendors in the SD-WAN sector such as Microsoft, Riverbed Technology, InfoVista, F5, and the European Advanced Networking Test Center (EANTC), Huawei is proactively improving services to all WAN edge customers by collaborating on an open SD-WAN cloud-network ecosystem.

Huawei’s Intent-Driven SD-WAN Solution and products play a pivotal role in its position as a challenger in the global WAN edge infrastructure market.

According to Gartner’s inaugural “Magic Quadrant” report for WAN edge infrastructure:

“Huawei offers a broad array of infrastructure hardware and software, including for networking, servers, and cloud. Its flagship WAN edge offering is the AR series router, which is available as a software instance, single-instance appliance, and as a vCPE platform. The vendor offers multiple WAN edge network functions including routing, SD-WAN, NGFW, and basic WAN optimization, and the requisite management provisioning, service chaining, and automation delivered by Huawei’s Agile Controller.”

In the report, Huawei stands out with the following strengths:

  • A deep hardware portfolio, with a range of appliance options and support for a wide variety of interfaces, including legacy T1/E1 and embedded LTE
  • Multiple WAN edge functions, including routing, SD-WAN and NGFW, which can all run on its vCPE platform and be chained via the Agile Controller. This can simplify WAN edge device sprawl, improve operational agility, and provide cost efficiencies.
  • A large installed base and proven capability to support large WAN deployments beyond 1,000 sites

In short, Huawei's dedication to improving its performance in the WAN edge infrastructure market and its comprehensive portfolio of SD-WAN products and services stand it in good stead for rapid future growth in the global market.

For more information on Huawei’s WAN edge offerings, visit these links:

Huawei SD-WAN Solution

Huawei AR Series Routers

Success Cases

Thanks to Brand Post (see source)

Tuesday, January 29, 2019

Verizon’s 4Q Results: Fios Posts 2.9% Revenue Growth YoY

NEW YORK — Verizon’s fourth-quarter results show 54,000 Fios Internet net additions, with total Fios revenue growth of 2.9 percent year over year.

During the same time period, Verizon lost a net of 46,000 Fios Video connections, continuing to reflect the shift from traditional linear video to over-the-top offerings. At year-end 2018, Verizon had 6.1 million Fios Internet connections and 4.5 million Fios Video connections.

Thanks to BBC Wires (see source)

The race to lock down industrial control systems | Salted Hash Ep 44

Guest host Juliet Beauchamp and CSO senior writer J.M. Porup talk about the challenges around securing the systems and networks used to control industrial plants and infrastructures.

Cisco exec airs 2019 cloud directions

Kubernetes, legacy application integration and edge computing are but a few of the technologies Cisco sees as having the greatest impact on cloud computing in 2019.

Cisco aggressively ramped up its own cloud presence in 2018, with all manner of support stemming from an agreement with Amazon Web Services (AWS) that will offer enterprise customers an integrated platform promising to help them more simply build, secure and connect Kubernetes clusters across private data centers and the AWS cloud. The joint Google-Cisco Kubernetes platform for enterprise customers also moved along in 2018.

Kip Compton, senior vice president of Cisco’s Cloud Platform and Solutions Group, recently blogged about how he sees the cloud computing market evolving in 2019. Cloud is a catalyst for changing how enterprises will do business in the emerging global digital economy, he wrote, and shows no signs of weakening in 2019.

Compton put forth the following key trends in his blog:

Kubernetes

Certainly the darling of the cloud/container world in 2018, Kubernetes keeps growing in popularity. The Cloud Foundry Foundation recently released the results of a new survey that found, among other things, that 45 percent of companies are doing at least some cloud-native app development, and 40 percent are doing some re-architecting/refactoring of their legacy apps.

Compton noted that almost every major cloud player announced a Kubernetes service in 2018.

“As enterprises transform and modernize their technology, Kubernetes is rapidly becoming an essential part of these transformations, enabling speed in developing and deploying applications across both private and public clouds,” Compton stated.   

Compton added that as customers deploy Kubernetes into mainstream production environments, this trend will grow.

Legacy applications

Legacy applications have been viewed as a bugaboo for cloud operations, but they aren’t going anywhere. Most enterprises will continue to develop applications that stay on premises for a period of time, but they will realize the business value in taking an on-premises resource and giving it a facelift with new capabilities that reside in different clouds, Compton wrote.

“New technologies like service mesh and Istio will mature, support, and enhance legacy applications by binding to innovative services residing on other environments.  As enterprises look to move forward with cloud, the looming task of altering or replacing hundreds of thousands of legacy applications is a huge cost and time investment." Istio software helps set up and manage a network of microservices or service mesh.

This is the year enterprises will need to recognize the careful planning and significant resources to effectively tackle the legacy application behemoth, he stated.

Data transparency

In the coming year we will see a growing trend toward enabling applications from anywhere to access data from anywhere, Compton wrote.

“With businesses accelerating application development and deploying in multicloud environments, there has been a great deal of innovation around cloud management platforms that offer a degree of cloud abstraction and microservices,” he said.

Whether due to latency, regulations, or data gravity, businesses have often been unable or unwilling to relocate large amounts of data. This reluctance has often been in conflict with the desire to deploy and manage applications across a variety of private data centers, clouds, and the edge, Compton wrote.

Edge computing

Over the course of the year, processing functions will move closer to the cloud edge and the data source to minimize latency on the applications.

“Enterprises will need to think about how they take capabilities beyond the data center, across WAN networks, and into increasingly smaller cloud edge deployments, closer to the end user or data collection point,” he said.

Doing this will lead to faster decision-making, and faster innovation will result, Compton said.

Thanks to Michael Cooney (see source)

Not So Fast Ansible, Cisco IOS Can’t Keep Up…

Remember how earlier releases of Cisco's Nexus OS started dropping configuration commands if you were typing them too quickly (and how that was declared a feature ;)?

Mark Fergusson had a similar experience on Cisco IOS. All he wanted to do was use Ansible to configure a VRF, an interface in the VRF, and an OSPF routing process on a Cisco CSR 1000v running software release 15.5(3).

Here’s what he was trying to deploy. Looks like a configuration straight out of an MPLS book, right?

ip vrf Customer_A
 rd 65000:1
 route-target import 65000:1
 route-target export 65000:1
!
interface GigabitEthernet1.146
 ip vrf forwarding Customer_A
 ip address 155.1.146.1 255.255.255.0
 ip ospf 2 area 0

router ospf 2 vrf Customer_A
 router-id 155.1.146.1
 redistribute bgp 65000 subnets

Guess what… when he tried to push that configuration to his CSR 1000v with the Ansible ios_config module, the in-VRF OSPF routing process refused to start, claiming it could not get a router ID (%OSPF-4-NORTRID: OSPF process 2 failed to allocate unique router-id and cannot start). The whole thing worked when he configured OSPF a bit later – it looks like it takes some time to get a subinterface ready after it’s been configured, and if you’re typing too quickly you’re out of luck.

Keep in mind that Ansible uses an SSH session to configure a Cisco IOS device, so it’s doing exactly the same thing you would if you were a really fast typist.

I could see two immediate solutions to this problem:

  • Get a router that has a decent API and an all-or-nothing commit mechanism;
  • Split the configuration into two parts and push them to the device with two ios_config calls (sketched below). It takes Ansible long enough to work through its gazillion layers of abstraction for Cisco IOS to realize what just happened.
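
Here’s a minimal sketch of the second workaround as an Ansible playbook, assuming Ansible 2.5 or later with the network_cli connection plugin; the inventory group name and the length of the pause are my own placeholders, not something from Mark’s setup:

---
- name: Two-phase VRF + OSPF deployment on Cisco IOS
  hosts: csr1000v            # hypothetical inventory group
  gather_facts: no
  connection: network_cli

  tasks:
    # Phase 1: create the VRF and put the subinterface into it
    - name: Configure the VRF
      ios_config:
        parents: ip vrf Customer_A
        lines:
          - rd 65000:1
          - route-target import 65000:1
          - route-target export 65000:1

    - name: Configure the subinterface in the VRF
      ios_config:
        parents: interface GigabitEthernet1.146
        lines:
          - ip vrf forwarding Customer_A
          - ip address 155.1.146.1 255.255.255.0
          - ip ospf 2 area 0

    # Give IOS a moment to bring the subinterface up; five seconds
    # is an arbitrary guess, tune it for your platform
    - name: Wait for the subinterface to settle
      pause:
        seconds: 5

    # Phase 2: by now the OSPF process can allocate its router ID
    - name: Configure the in-VRF OSPF process
      ios_config:
        parents: router ospf 2 vrf Customer_A
        lines:
          - router-id 155.1.146.1
          - redistribute bgp 65000 subnets

Because the second ios_config call runs only after the pause, the subinterface already has a usable IP address by the time the router ospf process tries to allocate its router ID.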

On a somewhat tangential note, a friend of mine called the current state of network automation “Unix Scripting in 1970s”. Unfortunately he wasn’t too far off…


To learn more about network automation with Ansible start with our Ansible for Networking Engineers webinar and when you’re ready to move from writing playbooks to building solutions enroll into Building Network Automation Solutions online course.

Thanks to Ivan Pepelnjak (see source)

Monday, January 28, 2019

You can sign up for this team collaboration app for free today

Efficiency is the name of the game in today’s fast-paced digital world, and whether you’re leading a team or an entire company, you should always be looking for new and creative ways to get more done in less time. That’s the goal, right? While there are plenty of pricey tools like Slack and Skype that promise to fine-tune your productivity, few are as quick and impactful as Glip, and it won’t cost you a dime.

With free chat, file sharing, and task management features, Glip is a collaboration tool that helps cut down on information overload and email back-and-forth. With Glip you can chat in real time via text or video, and you can even share your screen wherever you are. You can create and manage tasks, and instantly share and collaborate on files.

With 96% of users saying Glip has made their communications easier and 64% saying they deliver projects faster, Glip is a no-brainer for helping your team reach its full potential. Access Glip on any device, with apps for mobile and desktop, iOS and Android. Glip is free to use, with as many users as you want. You can get started for free (yes, free) today.

Thanks to DealPost Team (see source)

North America to Remain the Largest Multiplay Service Market through 2023

LONDON — The number of multiplay households in the Americas subscribing to double, triple or quadruple play service bundles reached an estimated 142 million in 2018, equating to a household penetration of 43 percent, just behind Europe with 48 percent, according to GlobalData, a data and analytics company.

The Africa and Middle East region, with a household penetration level of 5 percent, exhibits the lowest multiplay adoption. At a sub-regional level, however, there is great disparity in multiplay service adoption within the Americas, with household penetration reaching 30 percent in Latin America compared to 61 percent in North America as of the end of 2018.

“Robust fixed broadband infrastructure and extensive network coverage have led to a greater adoption of multiplay services in North America,” says Eulalia Marin-Sorribes, technology analyst at GlobalData. “In addition, discounted pricing adopted by operators and the inclusion of value-added services as part of multiplay bundles have also played an important role, helping to raise adoption levels historically. However, market maturity and declining customer interest in linear pay-TV amid the proliferation of over-the-top (OTT) video applications have started to slow adoption in more recent years.”

Thanks to BBC Wires (see source)

Build security into your IoT plan or risk attack

The Internet of Things (IoT) is no longer some futuristic thing that’s years off from being something IT leaders need to be concerned with. The IoT era has arrived. In fact, Gartner forecasts there will be 20.4 billion connected devices globally by 2020.

Another proof point: when I talk with people about their companies’ IoT plans, they no longer look at me like a deer in headlights as they did a few years ago. In fact, often the term “IoT” doesn’t even come up. Businesses are connecting more “things” to create new processes, improve efficiency, or improve customer service.

As they do, though, new security challenges arise. One of them is that there’s no “easy button.” IT professionals can’t just deploy some kind of black box and have everything be protected. Securing the IoT is a multi-faceted problem with many factors to consider, and security must be built into any IoT plan.

Top challenges associated with securing IoT endpoints

  • Physical security is overlooked. Businesses devote a significant amount of time and energy to cybersecurity. However, physical security is often an afterthought or overlooked altogether. Devices need to be protected against theft or hacking of the hardware. Because IoT is often deployed by non-IT individuals, there can be many devices that IT departments are unaware of. These unknown devices can be breached from a console or USB port and create backdoors into other networks. IT and cybersecurity teams need a better way of automating the discovery of IoT endpoints (see the sketch after this list).
  • Traditional security doesn’t work with IoT. Today’s cybersecurity is primarily focused on protecting the perimeter of a network with a large, expensive firewall, but ZK Research found only 27 percent of breaches occur there. (Note: I am an employee of ZK Research.) Although firewalls are still required to protect the network, IoT devices enable breaches to occur inside the network. IoT requires organizations to rethink their security strategies and focus on the internal network. Another factor with IoT devices is that many connect back to a cloud service to provide status updates or provide other information. This punches a legitimate but hackable hole through the firewall from the inside.
  • Many IoT devices are inherently insecure. Most IT endpoints such as PCs and mobile devices have some embedded security capabilities or can have an agent placed on them. Many IoT devices, by contrast, run old operating systems, ship with embedded passwords, and cannot be secured by a resident agent. This underscores the importance of rethinking security in a world where everything is connected. If the endpoint can’t be secured, then protection needs to move to the network.
  • Cybersecurity is growing in complexity. Protecting against external threats used to be a straightforward process: Place a state-of-the-art firewall at the perimeter, and trust everything inside of the network. That made sense when all the applications and endpoints were under the control of the IT department. Today, however, workers bring in their own devices, and the use of cloud services is extensive, creating new entry points. To combat this, security teams have been deploying more niche point products, which often increases the level of complexity. My research has found that organizations use an average of 32 security vendors, and this number is growing — leading to an environment that is becoming increasingly complex and less secure. Also, IT departments struggle today to manage the current set of connected devices. Adding three to five times more endpoints will overwhelm many security teams.
  • The number of blind spots has exploded. Cobbling together a patchwork of security tools from different vendors may seem like a sound strategy, as each device was meant to solve a specific problem. However, this approach leaves massive blind spots because the devices have little to no communications among them. Also, this architecture lacks automation, so the configuration of these devices must be done one at a time, meaning changes can often take months to implement. This delay puts organizations at serious risk.
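
To make the endpoint-discovery point above concrete, here is a minimal sketch of automated discovery using only the Python standard library: a TCP connect sweep of a single /24 subnet, flagging hosts that answer on ports commonly left open on IoT gear. Real discovery tools add ARP scanning, passive traffic analysis and device fingerprinting; the subnet and port list below are placeholder assumptions, not recommendations.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1"       # hypothetical /24 subnet to sweep
PORTS = (22, 23, 80, 443)  # ports commonly left open on IoT devices

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_host(last_octet: int) -> None:
    host = f"{SUBNET}.{last_octet}"
    open_ports = [p for p in PORTS if probe(host, p)]
    if open_ports:
        # Any responder not in the asset inventory is a discovery candidate.
        print(f"{host}: responds on {open_ports}")

# Sweep the subnet concurrently to keep the scan fast.
with ThreadPoolExecutor(max_workers=64) as pool:
    pool.map(scan_host, range(1, 255))
```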

Failure to have a comprehensive IoT strategy puts businesses at risk

It’s important to understand how big the risk of not having a comprehensive IoT security strategy really is. Success with IoT requires that a number of processes work together. A breach at any point can cause an outage and a possible loss of sensitive data. In many verticals, such as healthcare, state and local government, manufacturing and banking, IoT services are mission critical, so any kind of outage can cost companies millions. Indeed, in May 2016, the Ponemon Institute found the average cost of a data breach to be $3.62 million, up from $3.5 million in 2015.

There is tremendous business value in IoT, and I strongly recommend businesses be aggressive with deployments. However, I also advise building security into the plan instead of trying to implement it after deployment.



Thanks to Zeus Kerravala (see source)

IDG Contributor Network: The cloud-based provider: Not your grandfather’s MNS

Today, the wide area network (WAN) is a vital enterprise resource. Its uptime, often targeting 99.999% availability, is essential to maintaining the productivity of employees and partners and to preserving the business’s competitive edge.
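
To put that target in perspective, a 99.999% (“five nines”) availability target leaves a downtime budget of barely five minutes per year. Here is a minimal Python sketch of the arithmetic:

```python
# Annual downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} availability -> {downtime:.1f} min of downtime/year")

# 99.900% -> ~525.6 min/year
# 99.990% -> ~52.6 min/year
# 99.999% -> ~5.3 min/year ("five nines")
```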

Historically, enterprises had two options for WAN management models — do it yourself (DIY) and a managed network service (MNS). Under the DIY model, the IT networking and security teams build the WAN by integrating multiple components including MPLS service providers, internet service providers (ISPs), edge routers, WAN optimizers and firewalls.

These teams are responsible for keeping that infrastructure current and optimized. They configure and adjust the network for changes, troubleshoot outages and ensure that the network is secure. Since this is not a trivial task, many organizations have switched to an MNS, outsourcing the buildout, configuration and ongoing management, often to a regional telco.

Discussions with analysts suggest that, historically, 70% of European businesses preferred the MNS model, while only 30% of US businesses chose MNS over DIY. A recent Gartner paper indicates that enterprises are increasingly leaning toward the MNS model, with MNS adoption rising globally by 20% since 2016.

MNS has existed for over 25 years, so what is driving this recent change in customer preference?

The shape of business is changing

The on-premises-centric network design, focused mostly on physical locations – such as data centers, branches and regional hubs – is no longer compatible with the digital business. The enterprise’s assets now reside in a hybrid compute environment that includes multiple cloud data centers, and these data centers are accessed globally by both mobile users and office users.

Digital business demands higher agility and automation

Enterprises are investing in new technologies for WAN automation, such as SD-WAN, and expanding their connectivity portfolios from MPLS to Internet-based connectivity (broadband and LTE). Such technological changes boost network capacity, resiliency and availability while reducing the cost per Mbps.
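
The cost-per-Mbps point is easy to illustrate. The sketch below compares a hypothetical MPLS circuit with a hypothetical broadband link; the prices are invented placeholders for illustration only, not market data.

```python
# Illustrative only: all prices below are hypothetical placeholders.
links = {
    "MPLS (10 Mbps)":       {"monthly_cost": 800.0, "mbps": 10},
    "Broadband (200 Mbps)": {"monthly_cost": 120.0, "mbps": 200},
}

for name, link in links.items():
    cost_per_mbps = link["monthly_cost"] / link["mbps"]
    print(f"{name}: ${cost_per_mbps:.2f} per Mbps per month")

# With numbers like these, adding Internet transports alongside (or in
# place of) MPLS multiplies capacity while cutting the blended cost per Mbps.
```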

The cloud is changing the perception of IT service delivery

The business’s expectations of IT have been shaped by changes in both the consumer and business realms. In our personal lives, we have grown used to high-quality, self-service engagement models that get almost anything done in minutes. Self-service has also made its way into the workplace through DevOps: the ability to instantly access compute and storage resources on demand. Waiting, or requiring manual intervention, for anything is becoming the exception these days.

MNS seems like a good way to address these changes by shifting the WAN overhaul into the hands of service experts. But will the traditional MNS, delivered by a telco, provide the right mix of self-service, agility and expertise that the digital business expects?

Gartner has indicated that enterprises’ preference for handing MNS responsibility to legacy telcos is shifting in favor of non-traditional service providers.

What is the reason behind this shift? Telcos are built around processes that promote stability and availability above all other service characteristics, such as speed and agility. They provide little visibility and control to the end customer and require support tickets for almost every change. This gives telcos total control of the network and reduces downtime risk, but in this era of fast-changing business conditions, customers want availability and speed to go hand in hand.

Furthermore, since the networks are composed of multiple components from third-party providers, deep component-level expertise is not readily accessible, leaving telcos heavily dependent on their solution providers. This is particularly true for newer components, such as edge SD-WAN. The result is slower troubleshooting and longer recovery from outages.

In a nutshell, the telco architecture limits their ability to deliver self-service, resolve issues quickly, and evolve the technology stack. In that context, you simply can’t expect your network to meet the “DevOps standard” of speed and agility.

However, the answer to this limiting situation is what I like to call the cloud-based provider. The cloud-based provider, or cloud-native carrier, was built from the ground up to optimize speed, agility and cost while delivering a flexible management model. Let’s look at some of the significant factors that make cloud-based providers the ideal choice:

  • Cloud-based providers have no last-mile agenda. Unlike legacy MPLS telcos, they are transport agnostic. The customer can choose any type of last mile, with multiple redundant links at varying degrees of quality. These can include 4G/LTE transports that can overcome physical damage at the customer premises and provide an instant-on option for new locations. In addition, sites can be launched with any form of last-mile connectivity without waiting for the telco-provided premium connectivity option.
  • Cloud-based providers leverage new choices for global connectivity (the “middle mile”). Unlike global telcos that rely on their own expensive fiber, they create an overlay on top of multiple SLA-backed, Tier-1 connectivity providers. By buying massive capacity at wholesale prices, which are not otherwise available to most enterprises, the cloud-based provider maintains a very competitive price point. And the use of multiple SLA-backed providers creates a level of core resiliency that a single legacy telco cannot match and that the public Internet, with its unpredictable performance, cannot provide.
  • Cloud-based providers use their own software stack. Much like Amazon AWS, the cloud-based provider fully owns and controls its technology. The stack includes full edge SD-WAN capabilities and an affordable global private backbone with built-in enterprise-grade network security and WAN optimization. It is globally distributed over dozens of Points-of-Presence (PoPs) that seamlessly extend the full set of capabilities to any customer resource. Platform ownership also allows the cloud-based provider to address customer needs through rapid software evolution, so it can fully control service delivery and optimize the network end-to-end.
  • Cloud-based providers build expert support in. A fully owned, cloud-based service enables the optimal alignment of the support, operations and engineering teams. All teams work directly on the service, using the full organizational knowledge to rapidly address service disruptions. This is particularly crucial when outages result not from mere operational hiccups but from software bugs. Support teams can use engineering-level tools to perform a very detailed diagnosis, and engineering benefits from full access to the problem, minimizing the time to resolution. Contrast this with the telco-to-vendor communication that must occur every time a possible bug is encountered.
  • Cloud-based providers were designed for the cloud. A big driver for SD-WAN deployment is optimizing cloud-bound traffic. Because the cloud-based provider lives in cloud data centers shared by many cloud providers, its global network seamlessly optimizes traffic from any customer location to cloud destinations. This means that directing cloud traffic through the cloud-based provider can actually deliver better performance than the primary MPLS network, not just the basic value of MPLS offload.
  • Cloud-based providers offer flexible management options. For maximum agility, self-service portals enable customers to make policy and configuration changes to the network (a hypothetical sketch of such a change follows this list). The provider keeps the network technology stack current and provides 24x7 monitoring. Customers can opt for a fully managed service model or retain some or most of the WAN control.
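
As a purely hypothetical illustration of that self-service model, the sketch below pushes a policy change through an imaginary provider portal API using only the Python standard library. The endpoint, token and payload shape are all invented for illustration; no specific vendor’s API is implied.

```python
import json
import urllib.request

# Hypothetical values -- no real provider API is implied.
API_URL = "https://portal.example-provider.net/api/v1/sites/branch-42/policy"
API_TOKEN = "replace-with-a-real-token"

# Example change: prioritize VoIP traffic across this site's links.
policy_update = {"application": "voip", "priority": "high", "failover": "enabled"}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(policy_update).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="PUT",
)

# The change takes effect as soon as the portal accepts it -- no ticket needed.
with urllib.request.urlopen(request) as response:
    print("Policy update accepted:", response.status)
```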

Given the pace and scope of changes to the WAN, the customer tendency toward MNS is understandable. Meeting the business objectives of speed, agility and cost containment ultimately boils down to a choice between two architectural approaches: the legacy telco vs. the cloud-based provider.



Thanks to Matt Conran (see source)

The IoT brings targeted advertising into retail stores

The Internet of Things (IoT) is everywhere these days, from smart houses to smart cities to industrial applications. And now it’s coming to the coolers in a drugstore near you.

Walgreens is testing innovative, IoT-powered "smart coolers" that combine cameras, facial recognition software, and display screens in the cooler doors to serve targeted ads based on what the system can tell about shoppers rooting around for cold drinks and frozen treats.

Bringing the online ad experience in store

According to the Wall Street Journal, the system attempts to recreate the online advertising experience in brick and mortar stores, using facial recognition software to determine the age of the shopper and what products they’ve already selected — as well as environmental factors — to determine what ads to show. Supplied by Chicago-based Cooler Screens, the technology is designed to transform “retail cooler surfaces into IoT-enabled screens and [create] the largest retail point-of-sale merchandising platform in the world.”

“This new technology could provide brick-and-mortar stores with a marketplace similar to online advertising,” the Journal said. “Ice cream brands could duke it out to get the most prominent placement when it is 97 degrees outside; an older man could see ads for different products than a younger woman.”

Even better, perhaps, the cameras can tell whether shoppers pick up items highlighted in the ad, giving advertisers instant feedback on the effectiveness of their smart cooler ads.
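
Reading between the lines of the Journal’s description, the targeting logic presumably resembles the toy rule-based selector sketched below, keyed on estimated demographics and environmental inputs. This is a speculative illustration only, not Cooler Screens’ actual algorithm.

```python
# Speculative toy model of the ad selection described above --
# not Cooler Screens' actual algorithm.

def select_ad(estimated_age: int, outside_temp_f: float, items_in_hand: list) -> str:
    """Pick an ad from the kinds of inputs the article says the system infers."""
    if outside_temp_f >= 90:
        return "ice-cream-brand-battle"  # hot day: frozen treats compete
    if "soda" in items_in_hand:
        return "snack-cross-sell"        # pair a snack with the drink in hand
    if estimated_age >= 50:
        return "premium-coffee"
    return "energy-drink"

print(select_ad(estimated_age=62, outside_temp_f=97, items_in_hand=[]))
# -> ice-cream-brand-battle
```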

Some 15 large advertisers have signed up for the Chicago test, which began in November, the Journal reports, and the drugstore chain plans to roll the technology out to stores in San Francisco, New York, and Seattle this month.

Where do smart coolers fall on the creepy meter?

The pilot program doesn’t rely on selling user data, and Cooler Screens says it captures and stores only anonymous metadata. The Journal quotes Arsen Avakian, co-founder and CEO of Cooler Screens, saying, “The business model is not built on selling consumer data. The business model is built on providing intelligence to brands and to the retailers to craft a much better shopping experience.”

Avakian told the Journal that surveys indicate shoppers like the IoT coolers better than ordinary ones, but if you ask me, this project still runs a serious risk of being perceived as intrusive and creepy. Walgreens is apparently worried enough to post privacy notices and station a “concierge” nearby to answer any questions.

Good idea.

Not everyone wants their age and gender judged by a machine — and some folks don’t identify with binary gender choices. It’s not clear how accurate the system is, but facial recognition algorithms have well-documented issues with some populations. And just because the companies aren’t actively selling the data, that doesn’t mean there aren't privacy concerns. Worse, what if software failures lead to showing shoppers wildly inappropriate ads? That could get ugly fast.

More than just ads?

I hate to be cynical, but here’s an idea that could help short-circuit any potential issues. If the coolers were really smart, instead of using the system to show ads, they’d be programmed to offer discounts and coupons. After all, people have repeatedly proven to be willing to put up with amazing amounts of creepy potential embarrassment for the chance to save a few bucks on groceries and sundries. Heck, for 20 percent off, I probably would, too.



Thanks to Fredric Paul (see source)

Is jumping ahead to Wi-Fi 6 the right move?

In five years, all you’re going to find is Wi-Fi 6, or what most wireless experts are still calling 802.11ax. But five years is a long time. If you’re considering an early move toward the most cutting-edge Wi-Fi technology on the market, there are some hurdles that you’ll have to overcome.

The first is the preliminary state of the technology. Every access point (AP) on the market that’s being sold as 11ax or Wi-Fi 6 is pre-standard gear, given that the Wi-Fi Alliance hasn’t yet finalized the standard. That poses potential interoperability issues down the line, according to analysts. And of course, there are no Wi-Fi 6-ready smartphones, laptops or other client devices available yet.


Why upgrade to 802.11ax?

It’s important to understand why you’re upgrading, as well, noted Gartner senior principal analyst Bill Menezes. The main reason to upgrade now is futureproofing, given the lack of Wi-Fi 6-capable client devices on the market.

“You’re really just putting it in there because you got a great price on it, or you’ve decided, ‘Well, I may not have the money in five years to upgrade,’” he said.

It’s worth noting, however, that there are particular use cases that could benefit from Wi-Fi 6-ready hardware more than others. The one that comes up more than any other – unsurprisingly, given the technology’s focus on efficiently connecting to large numbers of client devices via a single AP – is the hospitality and entertainment sector. Sports stadiums, convention halls and other large event venues are likely to benefit from Wi-Fi 6’s ability to handle lots of connections per endpoint.

Beyond that, however, there’s little consensus among experts about early adoption of 802.11ax among particular verticals. Some analysts posit a use case for IoT – particularly industrial IoT – using that high connections-per-AP ability to connect sensors together, but others note that Wi-Fi is unlikely to be the preferred connectivity medium for the actual sensors. That’s not to say that edge gateway devices won’t use Wi-Fi to connect back to clouds or data centers, but the gateways themselves are more likely to use slightly more specialized technology – usually some form of low-power WAN – to communicate with the sensors directly.

Compatibility could be an issue for early adopters

Another potential hurdle is the difference between pre-standard hardware and standard-ready hardware. Gear being sold as Wi-Fi 6 now isn’t guaranteed to fully comply with the final 802.11ax standard, which the Wi-Fi Alliance is expected to finalize near the end of 2019.



Thanks to Jon Gold (see source)

Last Week on ipSpace.net (2019W4)

The crazy pace of webinar sessions continued last week. Howard Marks continued his deep dive into Hyper-Converged Infrastructure, this time focusing on go-to-market strategies, failure resiliency with replicas and local RAID, and the eternal debate (if you happen to be working for a certain $vendor) over whether it’s better to run your HCI code in a VM rather than in the hypervisor kernel like your competitor does. He concluded with a description of what the major players (VMware vSAN, Nutanix and HPE SimpliVity) do.

On Thursday I started my Ansible 2.7 Updates saga, describing how the network_cli plugin works, how the generic CLI modules are implemented, how to use SSH keys or usernames and passwords for authentication (and how to keep them secure), and how to execute commands on network devices (including an introduction to the gory details of parsing text output, JSON or XML).

The last thing I managed to cover was the cli_command module and how you can use it to execute any command on a network device… and then I ran out of time. We’ll continue with sample playbooks and network device configurations on February 12th.
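
The text-parsing problem is worth a tiny illustration. Below is a minimal Python sketch of turning the raw text a show command returns into structured data with a regular expression — the kind of gory detail the webinar digs into. The sample output and pattern are invented for illustration and are not taken from the webinar.

```python
import re

# Hypothetical raw output from a 'show interfaces' style command.
raw_output = """\
GigabitEthernet0/1 is up, line protocol is up
GigabitEthernet0/2 is administratively down, line protocol is down
"""

# Capture interface name, admin state, and protocol state per line.
pattern = re.compile(r"^(\S+) is ([\w ]+?), line protocol is (\w+)$", re.MULTILINE)

interfaces = [
    {"name": name, "admin_state": admin, "protocol": proto}
    for name, admin, proto in pattern.findall(raw_output)
]
print(interfaces)
# [{'name': 'GigabitEthernet0/1', 'admin_state': 'up', 'protocol': 'up'}, ...]
```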

You can get access to both webinars with a Standard ipSpace.net subscription.



Thanks to Ivan Pepelnjak (see source)

Friday, January 25, 2019

Comcast’s Q4 2018: Broadband Revenue Rises 10.1% to $4.4 Billion

PHILADELPHIA — Comcast high-speed internet revenue increased 10.1 percent, driven by an increase in the number of residential high-speed internet customers and rate adjustments, according to Comcast Corporation’s reported results for the year ending December 31, 2018.

For the twelve months ended December 31, 2018, Cable revenue increased 3.9 percent to $55.1 billion compared to 2017, primarily driven by growth in high-speed internet, business services and advertising revenue, partially offset by a decrease in video revenue.

For the year ended December 31, 2018, total customer relationships increased by 1.0 million. Residential customer relationships increased by 878,000 and business customer relationships increased by 123,000. Total high-speed internet customer net additions were 1.4 million, total video customer net losses were 370,000, total voice customer net losses were 103,000 and total security and automation customer net additions were 186,000.



Thanks to BBC Wires (see source)

Hargray Acquires New Colocation Facilities to Support Local Business

MACON, GA and SAVANNAH, GA — Hargray, a regional communications provider and metro-fiber over-builder, announced the acquisition of three colocation facilities to offer advanced data security and computer hardware storage to businesses across Georgia, South Carolina, and northern Florida. Hargray’s acquisition of these facilities is part of the company’s commitment to supporting continued economic growth and a response to a demonstrated need by area businesses to secure and back up critical data.

Hargray’s newest colocation facilities offer secure space to businesses across the Southeast. These capabilities are the latest additions to Hargray’s existing facilities and growing network, giving customers improved access to remote and geographically redundant locations that offer structurally secure buildings, equipment racks, cooling, power, bandwidth, physical security and ongoing maintenance.

Currently, area businesses in a variety of industries, including large wireless carriers, healthcare, government and education, are leveraging Hargray’s colocation capabilities to maintain critical applications in facilities that are typically only available in cities like Atlanta or Jacksonville. Now, businesses of every size can benefit from colocation for disaster recovery and as an alternative to storing critical data and equipment in their own offices, reducing business and financial risk while optimizing performance and productivity.



Thanks to BBC Wires (see source)