Wednesday, October 31, 2018

Democrats Want FCC Inspector General to Investigate Fake Net Neutrality Comments

Three Democratic senators are calling for an investigation into how nearly 10 million phony net neutrality comments made it into the record of the proceeding in which the Republican majority voted to rescind the rules in early 2018.

Sens. Ed Markey (Mass.), Richard Blumenthal (Conn.), and Brian Schatz (Hawaii) jointly signed a letter addressed to the FCC’s Inspector General claiming the net neutrality matter was likely clouded by industry-funded lobbyists and astroturf groups, possible Russian interference, and intransigence by Republican FCC officials unwilling or unable to investigate the phony comments.

The New York Attorney General’s office has made significant progress in its own independent investigation, identifying 14 so-called “groups of interest” that could have subverted the net neutrality debate with fake comments from non-existent individuals, comments from those whose identities had been stolen, duplicate comments, and signatures on questionnaires and petitions that may have misled the public about the definition of net neutrality.

New York subpoenaed industry-friendly special interest, lobbying, and public strategy groups including: Broadband for America, the Center for Individual Freedom, Century Strategies, CQ Roll Call, LCX Digital, Media Bridge, the Taxpayers Protection Alliance and Vertical Strategies.

Freedom of Information requests and the ongoing investigation uncovered multiple historical instances of manipulation and potentially counterfeit comments, according to the senators:

  • CQ Roll Call submitted “millions of individual comments” on behalf of a paid client in the broadband privacy docket.
  • In 2014, Broadband for America claimed many community organizations, veterans groups, and small businesses were opposed to net neutrality, but in fact these groups had no position on the issue and in some instances claimed they never heard of Broadband for America.
  • Media Bridge was involved in assisting a group called American Commitment to flood the net neutrality docket with duplicative comments hostile to net neutrality. Media Bridge sells companies on manipulating the public debate on issues, claiming “if your organization wants to stop ‘showing’ and start dominating the issues, pick up the phone and give Media Bridge a call.”
  • The Center for Individual Freedom was responsible for submitting comments that repeated the inflammatory phrase, “unprecedented regulatory power the Obama administration imposed on the internet.” A Wall Street Journal investigation found that 72% of those comments may have been falsely submitted.

“The Commission’s apparent disinterest in investigating fraudulent comments risks undermining public trust in the FCC’s rule-making process. Presently, the only efforts at accountability have been led by the New York State Attorney General and the Government Accountability Office (GAO), prompted by a request from Congress,” the senators’ letter reads. “The status of cooperation with both is unclear, and the FCC has previously resisted requests from the NY AG. Moreover, while journalists have sought to conduct their own research through FOIA requests, the Commission has ignored those requests and withheld documents under dubious exemption claims. Given the seriousness of this issue, the FCC should respond transparently and thoroughly, and fully cooperate with all attempts to investigate fraudulent comments.”

The senators are requesting the FCC’s Inspector General investigate:

  • What policies are in place at the FCC to investigate and address fake comments?
  • When did the FCC first become aware of the fraudulent comments?
  • Was the FCC aware of the sources of these comments, and did they investigate them?
  • Is the FCC fully cooperating with the NY Attorney General and GAO and is the agency turning over requested documents? If not, why?
  • What is the status of FOIA requests at the FCC? Are they being handled in a timely and responsive manner? Were denials and exemptions appropriate?

Cray introduces a multi-CPU supercomputer design

Supercomputer maker Cray announced what it calls its last supercomputer architecture before the era of exascale computing. It is code-named “Shasta,” and the Department of Energy, already a regular supercomputing customer, said it will be the first to deploy it, in 2020.

The Shasta architecture is unique in that it will be the first server (unless someone beats Cray to it) to support multiple processor types. Users will be able to deploy a mix of x86, GPU, ARM and FPGA processors in a single system.

Up to now, servers came with either x86 or, in a few select cases, ARM processors, with GPUs and FPGAs as add-in cards plugged into PCI Express slots. This will be the first case of all of these processor types running natively onboard, and I don’t expect Cray to remain alone in using this design.

Also beefing up the system is the use of three distinct interconnects. Shasta will feature a new Cray-designed interconnect technology called Slingshot, which the company claims is both faster and more flexible than competing interconnect protocols, along with Intel’s Omni-Path technology and Mellanox’s InfiniBand.

There has been a broad effort to improve interconnect technology, since communication between processors and memory is often the source of slowdowns. Processors, while no longer improving at the Moore’s Law rate, are still left waiting on other processors and memory, so expanding interconnect capacity has become a growing priority.

Slingshot is a high-speed, purpose-built supercomputing interconnect that Cray claims will offer up to five times more bandwidth per node than existing interconnects and is designed for data-centric computing.  

Slingshot will feature Ethernet compatibility, advanced adaptive routing, first-of-a-kind congestion control, and sophisticated quality-of-service capabilities. Support for both IP-routed and remote memory operations will broaden the range of applications beyond traditional modeling and simulation. Reduction in the network diameter from five hops in the current Cray XC generation of supercomputers to three will reduce latency and power while improving sustained bandwidth and reliability.
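
To put the diameter claim in perspective, here is a back-of-the-envelope sketch in Python. The per-hop figure is an assumption for illustration only; Cray has not published per-hop latency numbers here.

```python
# Illustrative only: the per-hop latency is an assumed figure,
# not a published Cray specification.
per_hop_ns = 300          # assumed switch traversal time, in nanoseconds

xc_diameter = 5           # worst-case hop count, current Cray XC generation
shasta_diameter = 3       # worst-case hop count, Shasta with Slingshot

xc_latency = xc_diameter * per_hop_ns
shasta_latency = shasta_diameter * per_hop_ns
print(f"XC worst-case path:     {xc_latency} ns")
print(f"Shasta worst-case path: {shasta_latency} ns")
print(f"Reduction:              {1 - shasta_latency / xc_latency:.0%}")  # 40%
```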

Cray is looking beyond just the HPC market with Shasta, though. It’s targeting modeling, simulation, AI and analytics workloads — all data-centric enterprise workloads — and says the design of Shasta allows it to run diverse workloads and workflows all on one system, all at the same time. Shasta’s hardware and software designs are meant to tackle the bottlenecks and other manageability issues that emerge as systems scale up.

Slingshot’s architecture is designed for applications that deal with massive amounts of data and need to run across large numbers of processors — such as AI, big data and analytics — while providing synchronization across all of those processors.

One sign that Cray is targeting the enterprise is that Shasta offers the option of industry-standard 19-inch cabinets instead of Cray’s custom supercomputer cabinets, and that it supports Ethernet, the data center standard for interconnectivity, alongside the standard supercomputer interconnects.

A supercomputer company pushing down into the enterprise will certainly force HPE, Dell, Cisco and the white-box vendors to up their game quite a bit.

Fiber breakthrough will run networks 100x faster

Twisting light beams within a fiber optic cable, rather than sending them linearly, will let computer systems, and the internet overall, run faster, according to researchers who have just announced new findings. The group reckons it could speed up the internet a hundredfold using the twisting technique.

“What we’ve managed to do is accurately transmit data via light at its highest capacity in a way that will allow us to massively increase our bandwidth,” Dr. Haoran Ren, of Australia’s RMIT University, said in a press release.

The corkscrewing configuration, in development over the last few years and only recently miniaturized, uses a technique called orbital angular momentum (OAM).

I wrote about this spiral concept a few years ago: Scientists then obtained speeds of 2.56 terabits per second (2,560 gigabits per second) using light wave signals spun into a corkscrew shape. They found that an open-air experiment they had tried was unwieldy, though. So researchers then migrated the technique to the more manageable medium of radio. There they obtained 32 gigabits per second.

RMIT, however, says it can now import OAM spirals into fiber, and do it economically, in part because it has miniaturized the equipment needed. It claims this will make OAM fundamentally viable and usable in conventional, existing data links.

“It fits the scale of existing fiber technology and could be applied to increase the bandwidth, or potentially the processing speed, of that fiber by over 100 times within the next couple of years,” Professor Min Gu, of the school, said. “This easy scalability and the massive impact it will have on telecommunications is what’s so exciting.” Fiber isn’t going anywhere. Even if radio becomes more important, such as in 5G networks, fiber is still needed for backhaul.

The school doesn’t say what speed it has gotten, or will obtain, other than using the 100x figure.

But, in part, it’s the new miniaturization of the equipment that’s the big deal. Previous experiments, by various global academic teams dating back to at least 2013, have involved larger equipment for the transmission and decoding. RMIT says the former gear would not have been practical for current telco environments; the newly shrunk, speed-inducing spiraling equipment, however, is nanoscale.

“To do this previously would require a machine the size of a table, which is completely impractical for telecommunications,” Ren says in the release.

Interestingly, DNA concepts are behind this spiraling data invention. “It’s like DNA, if you look at the double helix spiral,” Gu said, quoted by the Guardian newspaper in its coverage. “The more you can use angular momentum the more information you can carry.” The receiver then untwists the beams at the end of the transmission, thus recovering the data.

Spiraling therefore becomes another dimension in the fiber, creating the opportunity to encode more data. “Multiplexing allows data to be encoded in different modes of light such as polarization, wavelength, amplitude, and phase to be sent down the fibers,” the American Association for the Advancement of Science’s Science publication said of traditional fiber maximization methods, in an abstract of an unrelated 2013 OAM study.

“OAM can provide another degree of freedom whereby the photons are given a well-defined twist or helicity,” Science said. More data in the pipe at any one time, in other words.
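
The capacity argument is multiplicative: each independent property of light multiplies the number of distinguishable channels. A toy Python calculation makes the point; the mode counts below are assumptions for illustration, not figures from the RMIT work.

```python
# Assumed, illustrative mode counts -- not figures from the RMIT study.
wavelengths   = 40   # WDM wavelengths carried on the fiber
polarizations = 2    # two orthogonal polarization states
oam_modes     = 10   # distinct orbital-angular-momentum "twists"

channels_without_oam = wavelengths * polarizations
channels_with_oam    = channels_without_oam * oam_modes

print(channels_without_oam)  # 80 channels from wavelength + polarization alone
print(channels_with_oam)     # 800 -- OAM multiplies capacity in another dimension
```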

Govt responds to future broadband assessment

The government has issued an interim response to an assessment of the UK's future broadband needs published earlier this year, which called for a national broadband plan to be in place by spring next year.

The report, published by the National Infrastructure Commission, also called for nationwide coverage of full fibre broadband to be in place by 2033, a target the government has committed to meeting.

It was reported by ISPReview.co.uk that while the response to the report does not contain much in the way of new information, it reinforces the government's aims when it comes to improving the nation's broadband connectivity.

The document, published by the Treasury as part of its wider response to July's National Infrastructure Assessment, notes that Westminster has already committed £1 billion to developing the next generation of connectivity.

"In the coming decades our digital infrastructure will be key to driving economic growth," it stated. "This is why the government is committed to achieving a nationwide full fibre broadband network by 2033, and for the majority of the population to be covered by a 5G signal by 2027."

Tuesday, October 30, 2018

Ajit Pai Plans to Remain as FCC Chairman “For the Foreseeable Future”

Despite the potential for a Democratic Party takeover of the U.S. House of Representatives that is likely to usher in a new era of more aggressive oversight of the Republican-dominated Federal Communications Commission, current chairman Ajit Pai “plans to lead the FCC for the foreseeable future.”

Multichannel News reports Pai is unlikely to leave his post just two years after being appointed to the position by President Donald Trump, despite an ethics controversy over alleged assistance given to Sinclair Broadcast Group to help the company acquire more stations than the federal cap on single-entity station ownership would otherwise allow. Pai also was responsible for a highly controversial decision to cancel net neutrality provisions enacted during the Obama Administration.

“Chairman Pai remains focused on his key priorities, including bridging the digital divide, fostering American leadership in 5G and empowering telehealth advancements,” said Brian Hart, director of the FCC’s office of media relations.

Should both the Senate and House flip to Democrats in next week’s midterm election, Pai’s agenda of deregulation, media consolidation, and elimination of many Obama-era consumer protections would be in peril and subject to determined Congressional oversight.

Pai has taken heat from consumer groups for ending a set-top box competition program that could have forced television providers to accept equipment obtained competitively in the retail market. He also faced criticism for reinstating a program giving UHF TV station owners the opportunity to acquire more stations, directly benefiting Sinclair and allowing it to pursue its since-failed merger with Tribune Broadcasting.

Virtual application delivery controllers can speed traffic among microservices containers in data centers

The application delivery controller (ADC) market is ripe for disruption.

The ADC sits at a strategic place in the data center, in between the firewall and application servers, where it’s able to see, route and analyze much of the inbound and outbound traffic. Traditional ADCs were sold as all-in-one hardware appliances. However, software-defined networking and virtualization have enabled more flexible deployments of ADC functionality. At the same time, the advent of multicloud environments and microservices, such as containers, are changing the makeup of enterprise data centers.
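
As a rough illustration of the ADC's core job, here is a minimal round-robin load balancer sketched in Python. This is a toy model, not any vendor's implementation; real ADCs layer health checks, TLS termination, and traffic analysis on top of this basic distribution step.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of an ADC's core duty: spreading inbound
    requests across a pool of application servers."""

    def __init__(self, servers):
        self._pool = cycle(servers)   # endless round-robin iterator

    def route(self, request):
        backend = next(self._pool)    # pick the next backend in turn
        print(f"routing {request!r} -> {backend}")
        return backend

# Hypothetical backend addresses, for illustration only.
balancer = RoundRobinBalancer(["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"])
for req in ["GET /", "GET /api/orders", "POST /api/cart"]:
    balancer.route(req)
```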

Over time, migration to a software-defined data-center network will require the disaggregation of ADC features, increasing use of microservices-based ADC features, and more flexible licensing options.

What is the software-defined data-center network?

In software-defined networking (SDN), networking software is abstracted from networking hardware, enabling significant changes in how networks are built and operated. SDN impacts both the WAN and the data center network; in the software-defined data-center network (SDDCN), adaptable network resources are deployed with compute resources such as virtual machines and containers, along with enterprise disk and flash storage, to deliver specified performance for private cloud applications. Via software abstraction, data center resources can be easily reallocated to address changing application requirements without changing the underlying physical compute, storage, or network elements.

Users tell what you need to know about SD-WAN

Harrison Lewis wasn’t looking for SD-WAN, but he’s glad he found it.

Northgate Gonzalez, which operates 40 specialty grocery stores throughout Southern California, had distributed its compute power for years. Each store individually supported applications with servers and other key infrastructure and relied on batch processing to deal with nightly backups and storage, according to Lewis, the privately held company’s CIO.

Over time, the company’s needs changed, and it began centralizing more services, including HR and buying systems, as well as Microsoft Office, in the cloud or at the company’s two data centers. With this shift came a heavier burden on the single T-1 lines running MPLS into each store and the 3G wireless backup. Complicating matters, Lewis says, rainy weather in the region would flood the wiring, taking down terrestrial-network connectivity.

“It was problematic. We even doubled up on T-1 lines to each location, but it still wasn’t enough. The network had to be a lot more reliable,” Lewis says.

Lewis searched for a suitable – and cost-effective – alternative, researching incremental options that could have increased bandwidth and addressed the company’s security needs. “They all came with a significant price tag,” he says.

In July 2016, Lewis and his team came upon software-defined wide-area networking (SD-WAN), technology that decouples the control plane from the data plane and enables networking groups to control the entire WAN centrally. Uniquely, SD-WAN supports the use of multiple types of connectivity (such as MPLS, broadband, and wireless broadband), offering flexibility and ease of use for organizations with multiple locations.
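
A highly simplified sketch of that idea in Python (generic pseudo-logic, not VeloCloud's or any vendor's actual implementation): a centralized policy steers each traffic class onto whichever available link currently satisfies it, and failover falls out naturally. The link names and metrics here are hypothetical.

```python
# Generic illustration of SD-WAN policy-based path selection;
# link names and metrics are hypothetical.
links = {
    "mpls":      {"up": True, "latency_ms": 20, "cost": "high"},
    "broadband": {"up": True, "latency_ms": 35, "cost": "low"},
    "lte":       {"up": True, "latency_ms": 60, "cost": "metered"},
}

def pick_path(traffic_class):
    candidates = [name for name, link in links.items() if link["up"]]
    if traffic_class == "payments":      # business-critical: lowest latency wins
        return min(candidates, key=lambda n: links[n]["latency_ms"])
    if traffic_class == "backup":        # bulk transfer: avoid metered links
        cheap = [n for n in candidates if links[n]["cost"] != "metered"]
        return (cheap or candidates)[0]
    return candidates[0]                 # default: first available link

print(pick_path("payments"))   # mpls
links["mpls"]["up"] = False    # simulate an MPLS outage
print(pick_path("payments"))   # broadband -- traffic fails over automatically
```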

Lewis thought the technology was too immature to deploy at the time, but he kept an eye on its growth and by late 2017, considered it ready for a proof of production. With the NSX SD-WAN appliance from VeloCloud (VMware acquired VeloCloud in December 2017), he, along with his carrier AT&T, created a test zone at a single store, running the SD-WAN and traditional network side by side. The SD-WAN linked to two broadband connections and 4G wireless as a backup, along with ZScaler for Internet security. He put a similar configuration in the two data centers, which soon proved a viable approach. Today, Northgate Gonzalez has deployed SD-WAN in all 40 stores, with a recent bump to 5G wireless as backup.

The move to broadband and wireless backup increased bandwidth because all three connections can be used interchangeably by SD-WAN, Lewis says. It also decreased monthly connectivity expenses by about 40%. He’s particularly proud of this result, as he is mindful of his fiduciary responsibility not to just keep throwing T-1 lines at the problem. Doing so could have forced Northgate Gonzalez to raise prices or could have hurt shareholders. “That just doesn’t make sense if there are alternatives,” he says.

He appreciates SD-WAN’s ability to prioritize traffic in support of business-critical activities, including payments and ordering, allowing them “to take precedence over all else,” and the “somewhat” zero-touch nature of provisioning the appliances. “It doesn’t require a great deal of skill to install the appliance,” he says, adding he leveraged store technicians and help desk members to get the preconfigured appliances up and running at each site.

SD-WAN handles diversity of circuits

Luis Castillo, senior network manager for global network engineering at National Instruments, also was drawn to SD-WAN for its ease of deployment. National Instruments, an Austin-based maker of scientific equipment and software, operates in 50 countries and needed a solution that could handle the complexity of its distributed workflow. Customer service calls and research and development are handled by teams around the world, requiring tight attention to quality of service.

“We were throwing money at QoS toolsets to get classification, packet shaping, queuing, etc. – that was the only way we could maintain a certain quality of service,” Castillo says.

Along with the cost of the toolsets, requirements for bandwidth would climb – as much as 25% or more. “We only got approved for 1% or 2% increases in our annual budget, so the gap kept getting wider,” he says. As bandwidth demands grew, the company began to bump up against issues surrounding availability and the cost of more lines into their offices. “In Russia, a 4M bit/sec [connection] cost $10,000 a month. We couldn’t pay that,” he says.

The global nature of their business also made it difficult to get a single MPLS provider to handle all locations – and some locations, such as Armenia, didn’t have MPLS.

Castillo first began looking for alternatives in 2008, and deemed “performance-based routing,” a precursor to SD-WAN, not good enough to operationalize. “Most of the efforts in those early days didn’t leave the lab,” he says.

When SD-WAN emerged, he connected with Viptela (Cisco closed its acquisition of Viptela in August 2017), and determined the software-driven technology (atop Cisco vEdge routers) to be the best bet to integrate with National Instruments’ environment, especially its diversity of circuits.

Viptela’s zero-touch provisioning was also a draw. “It saved money because we didn’t have to ship engineers around the world,” Castillo says. He drew up an implementation document and shared it with the local IT worker (in-house or contract). He acknowledges that the pre-configuration and post-configuration can be a little more difficult, as you have to integrate the SD-WAN with the attached devices. “Those parts can be disruptive,” he says.

Castillo has not had to add MPLS lines and, as he says, has been able to “peel off dollars from MPLS.” But he has kept MPLS in the mix “because there are still sensitive applications where the Internet would not be good for transport, especially if you’re transmitting overseas.”

Having started deployment in mid-2017, SD-WAN technology is currently in use for 80% of the company’s 8,400 employees. Castillo looks forward to more features and functionality he expects will come once Viptela is more integrated into Cisco.

SD-WAN boosts QoS via traffic shaping

At Gerresheimer AG, which designs, develops, manufactures, and markets various glass and plastic products for pharmaceutical, biotechnology, and scientific research, Greg Taylor, manager of IT infrastructure for the Americas, came across SD-WAN as he was trying to work out how to get faster, more reliable connectivity between the company’s locations in a standardized way; the maximum speed was 3M bit/sec over T-1 links, which were really slow. Globally, the company has 43 sites in more than 15 countries, but Taylor was focused on his territory, which comprises six locations in the U.S. and Mexico. “Our WAN was neither homogenous nor easy to manage,” he says, adding that some sites, such as the one in India, were using site-to-site VPNs between firewalls.

Taylor, who considers himself extremely radical, wanted to junk the company’s commitment to MPLS and go all-broadband along with the deployment of SD-WAN. He knew the European-based company couldn’t – and wouldn’t – tolerate that. “We needed to go with a hybrid WAN,” he says.

The company’s proof-of-concept, and later deployment, made use of the Silver Peak Unity EdgeConnect SD-WAN edge appliances to create a hybrid WAN that uses MPLS and broadband to deliver up to 200M bit/sec to some sites.

Similar to Castillo, Taylor crafted a 15-page manual as well as a QuickStart Guide for each site involved. “They would have to plug it in and share it, and then we could configure it from here [in New Jersey],” he says. “I wouldn’t call it zero-touch, though, because you have to prep the switches and firewalls before the EdgeConnect appliances arrive.”

He says the “special sauce” of SD-WAN is giving the company a tremendous boost in terms of quality of service through built-in traffic shaping and other optimization functionality, even with some “poorer” links still in place.

The move to SD-WAN has saved Gerresheimer $10,000 a month, mostly because MPLS has tripled in speed (Taylor effectively renegotiated his MPLS contracts to get faster speeds for less money), each site can now make some use of Internet connections, and the secondary MPLS line (which was usually lying dormant, he says) is slated to be canceled.

SD-WAN has improved the experience for users, especially those working with the manufacturing execution system and financial reporting system (Citrix). “The executives and shop floor employees overseas say they feel like they are in the U.S. now,” he says.

At 80% site deployment globally, Taylor has bigger plans for the network now that SD-WAN has proven itself, including eliminating physical firewalls in favor of using EdgeConnect to service chain application traffic with cloud-based firewall services.

Calix Delivers AXOS Intelligent Access Edge

LAS VEGAS — Calix, a provider of broadband communications access systems, announced the AXOS Intelligent Access Edge solution, giving service providers the flexibility to bring Layer 3 functionality—including MPLS routed networking—to their new or existing Layer 2 access network, drastically simplifying their architectures and reducing total cost of ownership. The offering also includes a new line card, which enables service providers to aggregate their existing Layer 2 Optical Line Terminals (OLTs) while easily reaping the benefits of a collapsed Layer 3 Access network.

The AXOS Intelligent Access Edge Solution, which comprises the new Advanced Routing Protocol Module (ARm), the Routing Protocol Module (RPm), the Subscriber Management Module (SMm), and the E9-2 Intelligent Edge System, gives service providers the option of performing routing functions in their access networks with the RPm or having a full MPLS routed network with the ARm. The new Aggregation Services Manager line card for Calix’s E9-2 Intelligent Edge System runs with the ARm, providing 64 10G ports in a single system, aggregating Layer 2 OLTs, and delivering the benefits of the Layer 3 Access network. The existing Layer 2 access network is then connected to the service provider’s edge network with multiple MPLS-capable 40G/100G uplink ports.
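
For a sense of scale, here is a back-of-the-envelope on the aggregation math in Python. The announcement says only “multiple” 40G/100G uplinks, so the uplink count below is an assumption for illustration:

```python
# Back-of-the-envelope aggregation math for the E9-2 line card.
# The uplink count is assumed for illustration; Calix says only
# "multiple" MPLS-capable 40G/100G uplink ports.
downstream_ports = 64
downstream_gbps  = 10
uplinks          = 4        # assumed
uplink_gbps      = 100

downstream_total = downstream_ports * downstream_gbps  # 640 Gbps facing the Layer 2 OLTs
uplink_total     = uplinks * uplink_gbps                # 400 Gbps toward the edge network
print(downstream_total, uplink_total)                   # 640 400
print(f"oversubscription: {downstream_total / uplink_total:.1f}:1")  # 1.6:1
```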

Consolidating Subscriber and Service Related Functions
“The AXOS Intelligent Access Edge Solution with the new Advanced Routing Protocol Module has been designed specifically to allow service providers to instantly realize the simplicity and cost advantages of software defining their access networks without undergoing a complete replacement,” said Shane Eleniak, senior VP of platforms for Calix. “The efficiency advantage from consolidating subscriber and service related functions onto the access network not only reduces operational cost but speeds time to deploy new services—both of which are key to our customers, who are tasked with delivering increasingly complex next-generation services to their subscribers.”

As subscriber demands continue to increase, not only is bandwidth critical, but with new applications like virtual/augmented reality and high-capacity gaming, minimizing latency is vital as well. To meet these demands, existing architectures necessitate more network elements, which increases complexity and makes it difficult for service providers to keep up with these subscriber demands. Simplifying the network and reducing the number of elements while consolidating subscriber and service-related functions onto the access network edge is essential to enabling service providers to move fast and reduce cost.

“Collapsing network edge functions is an exciting step in the evolution of next generation open access ecosystems,” said David Tomalin, CTO of CityFibre. “Calix’s innovation and disruptive leadership in this space will enable wholesale network operators like us to optimize operating models and accelerate the introduction of new service-based functionality. I am looking forward to exploring the potential of the new AXOS Intelligent Access Edge solution as our partnership develops.”

Canadian Ministers Agree to National Broadband Strategy

VANCOUVER, BC — In Canada, federal, provincial and territorial ministers for innovation and economic development agreed to make broadband a priority and to develop a long-term strategy to improve access to high-speed internet services for all Canadians. The commitment to a strategy is the latest outcome of this intergovernmental table focused on driving growth and job creation through innovation.

Ministers recognize that access to high-speed internet service is critical for businesses to grow and compete and for all Canadians to fully access the goods and services available in a digital economy. Ministers agreed to work towards universal access to high-speed internet and improve access to the latest mobile wireless services along major roads and where Canadians live and work.

Smart Cities, Connected Cars and e-Health
High-speed connectivity is critical to the prosperity and well-being of Canadians, particularly as the next generation of high-quality networks enables smart cities, connected cars and e-health.

At the meeting, ministers were also briefed on the report from Canada’s Economic Strategy Tables. This report identifies opportunities to create the conditions for strong, long-term competitiveness that will secure Canadians’ quality of life. Ministers agreed to consider the advice of the tables in advancing their two-year work plan in ways that will help companies to scale up and to adopt new technologies.

Ministers also discussed the promotion of indigenous economic development through partnerships among indigenous businesses, non-indigenous businesses and communities.

NextLight Upgrades VoIP with Alianza Cloud Voice Platform

LAS VEGAS — NextLight, the community-owned fiber-optic network ISP from Longmont Power & Communications in Longmont, Colorado, has selected Alianza, a cloud voice platform company, for its next-generation VoIP solution. With Alianza’s turnkey software-as-a-service (SaaS) solution, NextLight is improving operations and its customers’ experience.

The City of Longmont launched NextLight as a municipal internet service in 2014. The network has since transformed Longmont into Colorado’s first Gig City, making symmetrical gigabit connections available citywide, and becoming the high-speed provider of choice for a majority of the city’s residents. In June, PC Magazine declared NextLight the fastest ISP in the nation. The National Civic League also cited NextLight as a factor when it named Longmont an All-America City in 2018.

Migrating to a Modern Cloud Solution
NextLight is the latest ISP to migrate from a hosted softswitch solution to a modern cloud solution. Alianza’s APIs, proven migration process, and experienced team made for an easy upgrade. Alianza’s solution supports NextLight’s existing phone services and provides a platform for growth with new business cloud communication services. NextLight has integrated the Cloud Voice Platform with GLDS’ BroadHub customer management, billing and provisioning systems and Calix GigaCenters to eliminate swivel chair processes, automate customer life-cycle management and streamline operations.

“We are constantly innovating to deliver a better customer experience,” commented NextLight’s Anne Lutz, director of customer service. “Alianza fits that mode and gives us a modern, agile, and easier to manage solution so we can deliver the best phone service portfolio for our customers, including expanded features.”

“NextLight is delivering a top-notch broadband portfolio for residents and businesses of Longmont,” commented Kevin Dundon, Alianza’s executive vice president of sales and marketing. “Our partnership with Longmont provides the ISP with feature-rich and future-proof cloud communications services and creates a powerful service combination.”

Budget offers £200m boost to rural full fibre broadband

A further £200 million of government funding has been allocated to support the rollout of better broadband technology as part of Chancellor of the Exchequer Philip Hammond's latest Budget, unveiled yesterday (October 29th).

The money will go towards replacing ageing copper lines with faster fibre-optic cables, and will aim to bring full-fibre capabilities to more rural areas that have so far been underserved by the latest internet technology.

It will come from the National Productivity Infrastructure Fund and is intended to act as a stimulus to help private companies invest in full fibre networks.

The initial funds will be targeted at primary schools, with nearby homes and businesses also set to benefit through an extension of the government's existing voucher scheme.

Locations in Cornwall, Wales and the Scottish Borders will be the first parts of the UK to benefit from these investments.

In his speech, Mr Hammond stressed the importance of better networks in supporting the UK's digital economy. "We are investing in our nation’s infrastructure and backing the technologies of the future," he said.

The government's ultimate aim is to completely replace the UK's copper network with fibre by 2033, and while this week's announcement will go only a small way toward achieving this goal – the total cost of the scheme is estimated to be as much as £30 billion – the funding has been welcomed as a positive first step.

The Financial Times reported that Jeremy Chelot, Chief Executive of urban broadband company Community Fibre, praised the "decisive steps" the government has taken toward the rollout of full-fibre lines, while Paul Stobart, Chief Executive of broadband provider Zen, said rural connectivity remained the “missing part of the puzzle" for improving the UK's digital capabilities.

However, others have called on the government to do more to support the deployment of full fibre networks, with Openreach describing the Budget as a missed opportunity to stimulate investment in fibre lines by reducing business rates for the technology.

The recent right-to-repair smartphone ruling will also affect farm and industrial equipment

Last week, the tech press made a big deal out of a ruling by the Librarian of Congress and the U.S. Copyright Office to allow consumers to break vendors’ digital rights management (DRM) schemes in order to fix their own smartphones and digital voice assistants. According to The Washington Post, for example, the ruling — which goes into effect Oct. 28 — was a big win for consumer right-to-repair advocates. 

Big news for vehicles

That promises to save millions of consumers some coin, but it may have a far bigger impact on a much smaller cohort: farmers, construction companies, fleet managers and other companies that now have legal permission to fix their motorized land vehicles (cars, tractors, and so on — owners of boats and airplanes are still out of luck).

As I noted in a Network World post earlier this year, many farmers have been struggling to get full benefits of the IoT technology built into their tractors because of restrictions written into their end-user license agreements (EULA). That can limit their use of the information generated about their own farming practices and also keep them from repairing their own equipment without calling in the vendor, as noted by Motherboard. The most publicized issue comes from John Deere, which argued that letting farmers fix their own equipment could lead to music piracy (no, not a joke).

The new ruling may not give farmers ownership of their farming data, but at least they now have the right to ignore the DRMs and fix their own machines — or to hire independent repair services to do the job — instead of paying “dealer prices” to the vendors’ own repair crews.

Per Motherboard, the new ruling “allows breaking digital rights management (DRM) and embedded software locks for ‘the maintenance of a device or system … in order to make it work in accordance with its original specifications’ or for ‘the repair of a device or system … to a state of working in accordance with its original specifications.’”

Only a partial victory in the war on DRM

From my perspective, this is indeed a win, but far from a complete victory. Farmers still aren’t allowed to hack into their own tractors to turn them into drag racers (that might be fun to watch!), but at least they can do whatever they need to do in order to make sure the machines aren’t falling down on the job.

That’s a good start, but it still doesn’t make sense to restrict legitimate device and equipment owners from making full use of the products they’ve paid for. As software and IoT capabilities worm their way into just about everything, antiquated rules like this — based on the widely reviled Digital Millennium Copyright Act of 1998 (DMCA) — reduce the value and utility of the products they encumber, sow (often justified) distrust of the vendors, and threaten to hobble the growth of the IoT market, all in service of maximizing short-term service revenue for a handful of vendors. DRM advocates — yes, they exist — claim that it helps stop IP theft and avoid compromising the equipment. But given silly arguments like John Deere’s music-piracy shenanigans, that logic can strain credulity.

Legal, but not necessarily possible

Perhaps even more important in the real world, this ruling doesn’t necessarily make getting around DRM restrictions easy, or even possible. Many vendors work very hard to make it as difficult as possible for anyone but authorized repair services to work on their products. The ruling just makes it legal to try to hack through DRMs in order to conduct repairs — it doesn’t guarantee that effort will be successful.

This approach is dangerously shortsighted. Here’s hoping that this Copyright Office ruling sparks a major rethink of the rights and obligations of equipment buyers and sellers and results in a legal framework that promotes innovation and industriousness, not restrictions and control. But I’m not holding my breath.

Plusnet offers limited-time cashback on broadband deals

New customers signing up to Plusnet broadband packages can take advantage of a cashback offer that makes the firm one of the cheapest providers in the UK, but you will have to hurry, as the deal ends today (October 30th).

The broadband provider is offering £65 cashback on its Fixed Price Unlimited Broadband and Phone Line package. This comes with a 12-month contract and offers consumers average download speeds of 10Mbps, as well as line rental.

It will cost £18.99 a month for the contract, but with the £65 cashback factored in, the total cost of the package falls from £228 to £163, working out to the equivalent of just £13.58 a month.
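
For anyone checking the maths, a quick sanity check of the advertised numbers in Python:

```python
monthly = 18.99
months = 12
cashback = 65

total = round(monthly * months)        # 227.88 rounds to the quoted £228
after_cashback = total - cashback      # £163
effective_monthly = after_cashback / months

print(total)                           # 228
print(after_cashback)                  # 163
print(round(effective_monthly, 2))     # 13.58
```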

If you want to add unlimited UK mobile and landline calls to the package, you can do this for an extra £8 a month, bringing the total cost to £26.99 a month, while free evening and weekend calls will cost £22.99 a month. Both of these deals are also included in the cashback offer.

There are also no setup costs to pay and, as Plusnet guarantees the price for the length of the contract, you don't have to worry about any unexpected price increases.

If your internet needs are a bit more demanding, Plusnet is also offering £50 cashback on its Fixed Price Unlimited Fibre Extra deal with evening and weekend calls included. This provides users with average download speeds of 66Mbps - ideal for gaming or high-definition streaming - and will cost £28.99 a month for an 18-month contract.

All these offers end at midnight tonight, so you'll have to act quickly to take advantage.

It’s All About Business…

A few years ago I got cornered by an enthusiastic academic praising the beauties of his cryptography-based system that would (after replacing the whole Internet) solve all the supposed woes we’re facing with BGP today.

His ideas were technically sound but probably won’t ever see widespread adoption – it doesn’t matter if you have great ideas if there’s not enough motivation to implement them (The Myths of Innovation is mandatory reading if you’re interested in these topics).

Here’s a pretty useful filter you can use when someone tries to tell you he solved a really hard problem (also sketched as code after the list):

  • Find out all the prior proposed solutions (if the problem is worth solving, someone else probably tried to solve it before);
  • Figure out whether the other solutions failed due to technical reasons (in which case there might be hope);
  • If the prior solutions were technically feasible but weren’t accepted, there might be a business reason for that;
  • If the proposed solution sufficiently changes the business model, there might be hope. Otherwise, move on.
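
If you prefer your heuristics executable, here’s the same filter as a tongue-in-cheek Python sketch:

```python
def worth_pursuing(prior_attempts, changes_business_model):
    """Tongue-in-cheek encoding of the 'solved a hard problem?' filter."""
    if not prior_attempts:
        # Nobody tried before: either the problem isn't worth solving,
        # or you really are first. Tread carefully.
        return "maybe -- find out why nobody tried"
    if any(a["failed_for_technical_reasons"] for a in prior_attempts):
        return "there might be hope -- technology may have caught up"
    # Prior solutions were technically feasible but never got adopted:
    # the blocker was business, not engineering.
    return "there might be hope" if changes_business_model else "move on"

# BGP security, roughly: RPKI and BGPsec were feasible but saw little uptake,
# and most proposals don't change anyone's business model.
print(worth_pursuing(
    prior_attempts=[{"failed_for_technical_reasons": False}],
    changes_business_model=False,
))  # move on
```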

Coming back to the BGP example:

  • We had RPKI for years. Uptake is minimal;
  • BGPsec was also developed years ago. Nobody even thinks about using it due to the additional compute overhead it would create;
  • There are tools to generate prefix lists from public routing databases. A very small percentage of ISPs cares enough about the quality of Internet routing to use them… in many cases due to someone passionate about quality like Job Snijders.

In case you’re wondering what’s wrong with the BGP world, Russ White (among numerous other things one of many ipSpace.net webinar authors and member of ExpertExpress team) nicely explained it in BGP Security: A Gentle Reminder that Networking Is Business. Have fun!

Monday, October 29, 2018

IBM-Red Hat deal: What the companies say

IBM announced yesterday that it is buying Red Hat for $34 billion, making it IBM's largest deal to date and the third largest in the history of the US tech industry.

After announcing the plan to close the deal sometime in the second half of next year, executives from the two companies held a joint conference call fleshing out the details. Here's what they had to say.

According to Arvind Krishna, Senior Vice President of Hybrid Cloud at IBM, this move represents a "game changer" that will redefine the cloud market. Krishna was joined by Paul Cormier, Executive Vice President and President of Products and Technologies at Red Hat.

As a result of the acquisition, Red Hat will become a unit of IBM's Hybrid Cloud division while Red Hat CEO Jim Whitehurst joins IBM's senior management team, reporting directly to Chairman, President and CEO Ginni Rometty.

According to Rometty, "IBM will become the world’s Number 1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses." Krishna said Red Hat will be important to helping IBM reach that goal.

In some ways, the acquisition of Red Hat by IBM is not a complete surprise. Last year the companies announced a partnership to ensure that Red Hat's OpenStack private cloud platform and its Ceph Storage would run in IBM's cloud and be managed by the same tools customers use to manage their on-premises deployments.

Both companies have also been notable players in the open-source market. Some of their joint accomplishments involve creating some of the fastest supercomputers in use today, built on IBM systems and running Red Hat Linux to address some of the world's largest scale problems.

Red Hat distributes and supports Red Hat Enterprise Linux and sponsors Fedora (free for anyone to use, modify and distribute) as well as other technologies used in data centers.

The companies said Red Hat will continue operating as a separate brand for the foreseeable future. IBM sees no reason for company locations to change. At a recent IBM all-company meeting, Rometty affirmed that the Red Hat culture would be maintained.

IBM said it remains committed to open source and to its participation in the open source community. Both IBM and Red Hat will continue to support open-source efforts such as Patent Promise, GPL Cooperation Commitment, the Open Invention Network and the LOT Network, they say.

Cormier said that Red Hat will continue its work with its partners. CoreOS is now part of Red Hat, and the company is always tweaking its technology to make it work better and serve customers and partners, he said, and that won't change. He said he anticipates no changes in the basic Red Hat roadmaps because they were built around what customers ask for.

Krishna said IBM's aim is to promote Red Hat Linux and OpenShift as standard technologies while also supporting other cloud technologies, including public clouds.

At $34 billion, the acquisition will cost IBM nearly a third of its own market value.

More details are available in the press release.

The IBM-Red Hat deal: what it means for enterprises

IBM has made a bold move to blast its way into the enterprise multicloud world by saying it would buy open source software pioneer Red Hat in a $34 billion stock acquisition.

For IBM the deal could mean many things. It makes the company a bigger open source and enterprise software player, for example, but mostly it gets Big Blue into the lucrative hybrid-cloud fray alongside its towering competitors — Google, Amazon and Microsoft, among others. Gartner says that market will be worth $240 billion by next year.

“IBM, with this deal, is hoping to target enterprise customers who are still in the early stages of their cloud migration journey," said Dennis Gaughan, chief of research, applications at Gartner. "While a lot of people have already invested in public cloud services, there are still a lot of existing applications that will need to be modernized for them to run effectively in the public cloud.”

“IBM is hoping that the combination with Red Hat gives them a richer portfolio of tools and services to help clients with that modernization. All the major public cloud providers see this opportunity as well and will also continue to target those same enterprises. This deal doesn’t change that.”

Indeed, while IBM has struggled to keep up with Amazon Web Services, Microsoft and Google in the public-cloud market, this deal gives IBM a new stronghold in the cloud development platforms market, said Dave Bartoletti, vice president and principal analyst with Forrester, in a statement.

“The combined company has a leading Kubernetes and container-based cloud-native development platform, and a much broader open-source-middleware and developer-tools portfolio than either company separately," Bartoletti wrote. "While any acquisition of this size will take time to play out, the combined company will be sure to reshape the open-source and cloud-platforms market for years to come.” 

Others noted that IBM is also interested in Red Hat’s Kubernetes-based OpenShift Container Platform, which lets customers deploy and manage containers on their choice of infrastructure.

“Although most think of Linux when they think of Red Hat, the company has been smartly building out OpenShift into a functional platform for developers building containerized and orchestrated ([for example] Kubernetes or k8s) cloud applications,” wrote Glenn Solomon a managing partner with venture capital firm GGV Capital in a blog about the deal.

“IBM wants and really needs access to the developer community around OpenShift, hence no price was too high," Solomon wrote. "Despite the fact that Red Hat’s traditional Linux business has been faltering – they missed earnings badly earlier this year – that didn’t matter to IBM because the value of OpenShift to them is so high.”

Another valuable component of Red Hat for IBM is its popular Red Hat Enterprise Linux (RHEL) operating system that runs on desktops, on servers, in hypervisors or in the cloud. Red Hat has over time bolstered RHEL with cloud and increased security features.

“IBM needs the broad engagement that RHEL brings, but they need a broad strategy, too, and I am not sure IBM gets that,” said Tom Nolle, president of CIMI Corp.

On the Red Hat side, the benefits seem to be the cachet and increased power IBM will bring it.

Red Hat president and CEO Jim Whitehurst wrote to Red Hat employees about the acquisition, saying that IBM can dramatically scale and accelerate what Red Hat is doing today.

“Imagine Red Hat with the ability to invest even more and faster to accelerate open-source innovation in emerging areas. Imagine Red Hat reaching all corners of the world, with even deeper customer and partner relationships than we have today. Imagine us helping even more customers benefit from the choice and flexibility afforded by hybrid and multi-cloud. Joining forces with IBM offers all of that, years ahead of when we could have achieved it alone,” Whitehurst said.

“Open source is the future of enterprise IT. We believe our total addressable market to be $73 billion by 2021. If software is eating the world – and with digital transformation occurring across industries, it truly is – open source is the key ingredient,” Whitehurst said.

In a joint press conference after the merger announcement IBM and Red Hat executives were adamant that the buy would not affect the vaunted culture at Red Hat and that Red Hat would remain an independent, neutral part of Big Blue. 

Red Hat has to support many other public clouds, partners and communities in order to maintain the value that IBM sees in Red Hat, said Paul Cormier, executive vice president and president of products and technologies for Red Hat. “The unique culture that Red Hat has developed has to be maintained.”

But bringing two different company cultures together is always hard, as is retaining key executives and leadership from the acquired company, Gartner’s Gaughan said. “Maintaining Red Hat partnerships with existing competitors while articulating a value proposition of why the IBM/Red Hat combination is better will also be a balancing act.”

IBM said Red Hat will be part of its hybrid-cloud team but as a distinct unit, “preserving the independence and neutrality of Red Hat’s open source development heritage and commitment, current product portfolio and go-to-market strategy, and unique development culture.” Red Hat’s management team, steered by Whitehurst, will remain, and Whitehurst will report to IBM CEO Ginni Rometty. IBM said Red Hat will continue to grow its partnerships with AWS, Azure, Google and Alibaba.

In the end, Gaughan said, there is still much to be sorted out, and there are a lot of things that IBM/Red Hat can’t do until the acquisition closes sometime in the second half of 2019. "[Customers] should be talking to the vendors, understand the potential impact of the deal, getting clarity on roadmaps when they become available and keeping up to date on any potential changes to contracting/licensing as a result of the acquisition closing.”

AT&T Pulls Plug on Turner Classic Movies/Warner Bros.’ FilmStruck Streaming Service

What began two years ago as the paid streaming service of Turner Classic Movies and grew into a highly respected catalog of 1,800 critically acclaimed films has been axed by its new owner, AT&T, because it failed to attract a mainstream audience of subscribers.

FilmStruck was the five-star version of Netflix, a niche streaming service that curated a vast collection of art house, international, independent, classic, and cult favorite films, perceived by many critics as an essential treasure trove. Among the titles: “Casablanca,” “Rebel Without a Cause,” “Singin’ In the Rain,” “Citizen Kane,” “The Music Man,” “Bringing Up Baby,” “The Thin Man” and “Who’s Afraid Of Virginia Woolf?” from Warner Bros. Other classics included: “Babette’s Feast,” “Blow Out,” “Boyhood,” “Breaker Morant,” “Chicago,” “A Hard Day’s Night,” “My Life as a Dog,” “Our Song,” “The Player,” “A Room with a View,” “Seven Samurai,” “The Seventh Seal,” “Thelma & Louise,” “The Times of Harvey Milk” and “The Umbrellas of Cherbourg.”

Last Friday morning, subscribers received a disappointing email:

We regret to inform you that effective November 29, 2018, FilmStruck will be shutting down. As a subscriber currently on a monthly plan, effective immediately you will no longer be billed for FilmStruck and will continue to have access to the service until November 29.

We would like to take this opportunity to thank you for being a FilmStruck subscriber. It has been our pleasure sharing the best of indie, art house, and classic Hollywood with you. FilmStruck was truly a labor of love, and in a world with an abundance of entertainment options – THANK YOU for choosing us.

If you have any questions, please visit our FAQs or email the FilmStruck customer service team at [email protected]. You can also manage your account by clicking here.

Thank You,
The FilmStruck Team

Customers were informed all levels of the FilmStruck service were being discontinued. This included the FilmStruck Only Monthly package, the FilmStruck+Criterion Monthly package, the FilmStruck+Criterion Annual package, and the FilmStruck+Criterion Student package. Customers on annual plans will receive a pro-rated refund of the remaining term of their subscription.

“We’re incredibly proud of the creativity and innovations produced by the talented and dedicated teams who worked on FilmStruck over the past two years. While FilmStruck has a very loyal fanbase, it remains largely a niche service,” AT&T’s Turner and WB Digital Networks said in a statement. “We plan to take key learnings from FilmStruck to help shape future business decisions in the direct-to-consumer space and redirect this investment back into our collective portfolios.”

In fact, AT&T has been pressing hard to wring cost savings out of its multi-billion dollar acquisition of Time Warner, Inc., now known as WarnerMedia. AT&T has sent a clear message it will not be in the niche content business.

On Oct. 16, AT&T ordered the closure of DramaFever, a Warner Bros. subscription video on demand service offering Korean dramas.

Last week, the company also announced it was pulling the plug on Super Deluxe, a Millennial-targeted content producer that claimed to reach 52 million 18-34 year olds, with 165 million views across Facebook, Twitter, YouTube, and Instagram. AT&T called Super Deluxe “duplicative.”

The first sign that AT&T intended to be aggressive about changing Time Warner’s media businesses took place at a June “town hall” meeting with new HBO executive John Stankey, when employees learned AT&T was impatient with its newly acquired premium movie channel. Stankey made clear AT&T was gunning to push for major changes at HBO, pulling away from a decades-old format showing movies repeated a dozen times a month and a handful of very expensive, but award-winning TV series. The future of AT&T’s HBO will be closer to a Netflix competitor, releasing enough new shows and original content to attract mainstream audience viewing for several hours a day. To make this possible, AT&T will have to provide HBO with a bigger budget, funded in part from money transferred away from services like FilmStruck.

AT&T sources have leaked word that the company will eliminate most of Time Warner/WarnerMedia’s projects that are not proven major producers of revenue to fulfill promises to Wall Street that AT&T’s acquisition will eventually result in substantial financial growth for investors.

FilmStruck subscribers and some Hollywood critics feel blindsided by the decision to shutter the streaming service. New Yorker author Richard Brody minced no words — “Nothing but contempt for the Mr. Potters who are shutting down FilmStruck,” Brody wrote in a scathing opinion piece.

Among consumers, disgust over the decision ranged from shock among Turner Classic Movies fans that AT&T would shut down a service showing films not mired in violence, sex, and immorality, to fury among film buffs disillusioned with Hollywood’s mainstream commercial fare over the abandonment of a central repository of independent, high-quality cinema.

AT&T believes the next generation of streaming will be much less about assembling collections of old movies or producing niche, low-budget movies and series to appeal to the Netflix crowd. Instead, the next war will be fought over high-budget, high-expectation mainstream movies and series streamed on premium subscription services. WarnerMedia will launch its own paid streaming service in 2019, where it will face similar paid services launching late next year from some of America’s largest media conglomerates, including Disney. Disney’s planned original productions for its forthcoming service include new episodes of the animated Star Wars: The Clone Wars; a new, premium-budget live-action Star Wars series from Jungle Book and Iron Man director Jon Favreau; and shows based on the High School Musical and Monsters, Inc. franchises.


Understanding mass data fragmentation

The digital transformation era is upon us, and it’s changing the business landscape faster than ever. I’ve seen numerous studies that show that digital companies are more profitable and have more share in their respective markets. Businesses that master being digital will be able to sustain market leadership, and those that can’t will struggle to survive; many will go away. This is why digital transformation is now a top initiative for every business and IT leader. A recent ZK Research study found that a whopping 89% of organizations now have at least one digital initiative under way, showing the level of interest across all industry verticals.

Digital success lies in the quality of data

The path to becoming a digital company requires more than a CIO snapping their fingers and declaring the organization digital. Success lies in being able to find the key insights buried in the massive amounts of data businesses hold today. That requires machine learning–driven analytics, a topic that has received a significant amount of media attention. The other half of the equation is the data itself. Machine learning alone doesn’t do anything; it needs data to analyze, and as the old axiom goes, good data leads to good insights and bad data leads to bad insights.

Mass data fragmentation hinders digital initiatives

For most companies, data isn’t the fuel that powers digital transformation — it’s the biggest obstacle, because of something I’m calling mass data fragmentation (MDF), a technical way of saying that data is scattered all over the place, much of it unstructured, leaving organizations with an incomplete view of it. Data is fragmented across silos, within silos and across locations. Adding to the problem, most companies keep multiple copies of the same data. Some data managers have told me that about two-thirds of their secondary storage consists of copies, but no one knows which copies can be kept or deleted, forcing them to keep everything. If bad data leads to bad insights, then fragmented data leads to fragmented insights, which can lead to bad business decisions.
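
To make the copy problem concrete, here is a minimal sketch (in Python, standard library only) of how a data manager might start quantifying duplication on a single storage mount: it walks a directory tree, hashes file contents, and reports any byte-identical copies. The mount point passed on the command line is a hypothetical example, and a real secondary-storage estate spans many systems a simple script like this cannot see.

```python
import hashlib
import os
import sys
from collections import defaultdict

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file's contents in 1 MB chunks to avoid loading it whole."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(root):
    """Group every file under `root` by content hash; return groups with copies."""
    by_hash = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                by_hash[sha256_of(path)].append(path)
            except OSError:
                continue  # unreadable file; skip it
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

if __name__ == "__main__":
    # Usage: python find_dupes.py /mnt/secondary-storage  (path is illustrative)
    for digest, paths in find_duplicates(sys.argv[1]).items():
        print(f"{len(paths)} identical copies (sha256 {digest[:12]}...):")
        for p in paths:
            print("    " + p)
```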

Digital natives such as Amazon and Google are data-centric and architected their infrastructure to avoid the MDF issue. This is why those businesses are agile, nimble and always seem to be at the forefront of market transitions. They have access to a larger set of quality data and are able to gain insights that other companies can’t.

Many factors contribute to mass data fragmentation

 The majority of companies were born in an era when data was viewed not as a competitive asset but rather as a necessary evil. For most companies, the mere mention of data evokes images of high-priced storage systems, ineffective backups, a complex management problem and a source of risk that could cripple the company. To solve the MDF problem, it’s important to understand how we got here. Below are the main factors that have contributed to MDF.

  • Data has exploded. Data growth continues at an exponential rate. Ninety percent of all the data ever generated has been created in the past five years. Video, IoT, messaging and the cloud will only exacerbate the problem. The legacy mindset of “keep everything forever” is no longer viable.
  • Most data is unstructured. Most organizations have far more data than they are aware of. The typical storage manager knows how much data is resident on centralized storage systems, but that’s just a fraction of what exists. The average enterprise likely has millions of gigabytes of information stored on ad hoc storage systems in dozens or perhaps hundreds of locations. Then there’s the cloud, which includes the corporate-sanctioned public services as well as hundreds of consumer-grade ones that workers use. IT’s ability to secure, control and use all the data no longer exists.
  • Data is dark. Even if IT managers actually knew where all their data was, it’s unlikely they would know its contents: whether it holds personally identifiable information, who the owner is, and when it was last accessed and by whom. Data is essentially a black hole, making it nearly impossible to manage and to meet increasingly stringent compliance requirements.
  • Secondary storage dominates. Often about 80% of an organization’s data falls into the secondary storage bucket. This includes data stored in backups, archives, file shares, object stores, data warehouses and public clouds. Secondary storage primarily holds data that is accessed infrequently rather than data actively contributing to the company’s overall data set. This means any insights locked in secondary storage will likely never be discovered.
  • Data is managed by legacy infrastructure. The IT industry is currently in an unparalleled time of innovation. Containers, flash storage, the cloud, mobile improvements and software-defined infrastructure have made infrastructure highly agile and brought it into alignment with digital trends. However, secondary storage has stood still for the better part of three decades. Most organizations use a mix of siloed and outdated point products that were built for one specific function, such as backups or file shares.  

MDF is a serious enough problem that it now impairs organizations’ ability to compete in the digital era. It’s time for a major shift in the storage industry — not a few incremental improvements, but a complete rethink of how data is managed, one that addresses the many problems associated with MDF. That demands a fresh approach to data management, but that’s the subject of another post.



Free wine on offer with latest Virgin Media sale

A new sale from Virgin Media could be great news for wine lovers, as the company is offering 16 bottles free when you sign up to certain TV and broadband packages for a limited time.

The flash sale, which runs until Wednesday (October 31st), will allow customers to choose a free 16-bottle selection from Virgin Wines, normally priced at £210. Non-drinkers can alternatively claim £100 of bill credit.

You can claim your free bottles on any of the following Virgin Media packages: Mix TV, Full House TV, Full House Movies TV, Full House Sports TV, and VIP TV.

All of these packages come with a 12-month contract and a one-off £20 setup fee, and include a Virgin Media V6 set-top box, a Hub 3.0 wireless router and a Virgin Phone plan featuring free weekend calls to UK landlines.

The Mix TV deal includes more than 150 TV channels and fibre broadband with average download speeds of 213Mbps - boosted from Virgin's previous 108Mbps speeds - and will cost £47 a month for the first 12 months.

Meanwhile the Full House package also offers boosted download speeds of 213Mbps, and comes with more than 230 channels, including BT Sport, Sky One, Fox and Discovery in high-definition. This will set users back £57 per month.

Users can alternatively add Sky Cinema with the Full House Movies deal for £67 per month, or the full range of Sky Sports channels for £77 per month as part of the Full House Sports deal.

Finally, people who want the full works can get the VIP TV package, which features more than 260 channels - including all the above movie and sports channels - for £89 per month for the first 12 months. This also comes with even faster broadband, with average download speeds of 362Mbps.
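
For anyone comparing the deals, here is a quick back-of-the-envelope sketch in Python of the first-year cost of each package, assuming the quoted monthly prices hold for the full 12 months and nothing extra is added:

```python
# First-year cost per package: 12 discounted monthly payments plus the £20 setup fee.
# Prices are the quoted first-12-month rates; add-ons and price changes are ignored.
SETUP_FEE = 20  # one-off, in pounds

MONTHLY_PRICES = {
    "Mix TV": 47,
    "Full House TV": 57,
    "Full House Movies TV": 67,
    "Full House Sports TV": 77,
    "VIP TV": 89,
}

for package, monthly in MONTHLY_PRICES.items():
    print(f"{package}: £{12 * monthly + SETUP_FEE} in year one")
```

On those assumptions, Mix TV works out at £584 for the first year and VIP TV at £1,088.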

The free case of wine features a selection consisting of seven red wines from around the world, seven whites, one rosé and one bottle of bubbly, and will be sent 28 days after your Virgin Media installation.


Virtual application delivery controller: Where it fits into the software-defined data-center network

The application delivery controller (ADC) market is ripe for disruption.

The ADC sits at a strategic place in the data center, in between the firewall and application servers, where it’s able to see, route and analyze much of the inbound and outbound traffic. Traditional ADCs were sold as all-in-one hardware appliances. However, software-defined networking and virtualization have enabled more flexible deployments of ADC functionality. At the same time, the advent of multicloud environments and microservices, such as containers, are changing the makeup of enterprise data centers.
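
To illustrate why ADC functionality ports naturally to software, here is a minimal sketch of one core ADC duty, layer-7 load balancing, written in Python with only the standard library. The backend addresses are hypothetical placeholders; a production ADC layers health checks, TLS offload, session persistence and traffic analytics on top of this kind of forwarding.

```python
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical application servers sitting behind the load balancer.
BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]
pool = itertools.cycle(BACKENDS)  # simple round-robin rotation

class LoadBalancer(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(pool)  # pick the next backend in rotation
        try:
            # Forward the request and relay the backend's response to the client.
            with urllib.request.urlopen(backend + self.path, timeout=5) as resp:
                body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        except OSError:
            self.send_error(502, "Backend unreachable")

if __name__ == "__main__":
    # Clients connect here; the balancer spreads their requests across BACKENDS.
    HTTPServer(("0.0.0.0", 8000), LoadBalancer).serve_forever()
```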

Over time, migration to a software-defined data-center network will require the disaggregation of ADC functions, increasing use of microservices-based ADC features, and more flexible licensing options.

What is the software-defined data-center network?

In software-defined networking (SDN), networking software is abstracted from networking hardware, enabling significant changes in how networks are built and operated. SDN affects both the WAN and the data-center network. In the software-defined data-center network (SDDCN), adaptable network resources are deployed alongside compute resources such as virtual machines and containers, along with enterprise disk and flash storage, to deliver specified performance for private cloud applications. Via software abstraction, data-center resources can be easily reallocated to address changing application requirements without changing the underlying physical compute, storage or network elements.


IBM says buying Red Hat makes it the biggest in hybrid cloud

In a move it says will make it the world’s leader in hybrid cloud, IBM is buying open-source giant Red Hat for $34 billion, banking on what it sees as Red Hat’s potential to become the operating system of choice for cloud providers.

IBM says it expects growth in the use of cloud services to blossom in the coming years, with enterprises poised to expand from using cloud for inexpensive compute power to placing more applications in the cloud.

“To accomplish this, businesses need an open, hybrid cloud approach to developing, running and deploying applications in a multi-cloud environment,” IBM says in a written statement.

Part of the hesitancy to move more workloads to the cloud stems from concerns about security and the difficulty of moving those workloads from cloud to cloud, the company says, and the acquisition will position IBM to address both.

IBM and Red Hat have partnered for more than a year to integrate their cloud offerings: Red Hat’s OpenStack private cloud platform and Ceph Storage with IBM’s public cloud. The goal was to win Red Hat customers by letting them use Red Hat’s management tools for workloads placed in the IBM cloud.

All IBM customers employ cloud in some way, says Arvind Krishna, senior vice president of IBM Hybrid Cloud, in a written statement. “But they are only 20% or so into their cloud journey,” he says. “They are eager to move their business applications to the cloud, the next chapter of the cloud, the next 80%.”

In addition to hybrid cloud support, the pending merger will provide corporate developers with container and Kubernetes technologies essential to their work writing cloud-native applications, he says. 

In addition to helping businesses directly with their move to the cloud, IBM has its eye on the use of Red Hat as an operating system around which many cloud providers build their infrastructure. That, IBM says, will help businesses building their own hybrid and multi-cloud infrastructure move services and data from cloud to cloud more readily.

“They need open cloud technologies, that enable them to move applications and data across multiple clouds – easily and securely,” the company says in a statement. “IBM and Red Hat will be strongly positioned to address this issue and accelerate hybrid multi-cloud adoption.”

IBM is calling the agreement to buy Red Hat its largest acquisition ever, one that will help it expand its portfolio, boost its $19 billion cloud business and tap into what it sees as a $1 trillion market.

After the acquisition, Red Hat will become a free-standing unit of IBM that will keep its leadership, facilities and culture, IBM says.



O2 to extend 4G mobile broadband availability to rural regions

Mobile operator O2 has unveiled a new plan to extend its 4G coverage to more rural parts of the UK, which could be good news for areas where people still find it difficult to access a high-speed broadband connection.

In many rural parts of the UK, a 4G mobile broadband connection can actually offer faster and more reliable speeds than fixed-line alternatives, so the news that O2 is bringing this connectivity to more locations may be warmly received.

The company has announced it is committing to adding 4G capabilities in 339 rural communities up and down the UK by the end of 2018, from Drumoak in Aberdeenshire to Lizard in Cornwall. Some 250,000 residents in these locations are expected to benefit from the investment.

Derek McManus, Chief Operating Officer at Telefonica UK, commented: "We know mobile has the power to make a real, positive difference to people's lives and businesses in rural communities across Britain. That's why we're proud to be investing in 4G connectivity for more than 330 rural areas by the end of this year."

According to research commissioned by O2 and conducted by Development Economics, the implementation of 4G connectivity will also benefit more than 14,000 rural businesses, helping boost their collective revenue by as much as £141 million a year.

The tourism and hospitality sector is expected to be the number one beneficiary of this, as businesses will be able to use better connectivity to attract more customers and save time and money. Meanwhile, the transport and manufacturing sectors were also named as among the key areas set to enjoy revenue boosts.

Digital Minister Margot James also welcomed the news, noting that while 4G mobile broadband coverage is improving all the time, there is still work to be done, especially in remote parts of the country.

"We've already reformed planning laws to make it easier and cheaper to install and upgrade digital infrastructure, and it’s great to see O2 and the rest of industry responding to ensure more people in rural Britain can share the brilliant benefits of 4G connectivity," she said.


Equinix and Oracle® Showcase Strategy to Optimize Cloud Performance for Digital Business at Oracle OpenWorld

Regulating the IoT: A conversation with Bruce Schneier | Salted Hash Ep 49


Security expert and author Bruce Schneier talks with senior writer J.M. Porup about the widespread use of connected chips, which allow hackers to access cars, refrigerators, toys and, soon, even more consumer items in the home.

Observability Is the New Black

In early October I had a chat with Dinesh Dutt discussing the outline of the webinar he’ll do in November. A few days later Fastly published a blog post on almost exactly the same topic. Coincidence? Probably… but it does seem like observability is the next emerging buzzword, and Dinesh will try to put it into perspective by answering these questions:

  • What exactly is observability?
  • How is it different from monitoring?
  • Why does it matter?
  • When could modern technologies reduce observability, and what can you do to improve it?

At the end of the webinar, Dinesh plans to introduce a new open-source observability platform he’s been working on over the last few months.
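
While we wait for the webinar, here is a tiny Python sketch of the distinction as I understand it: a monitoring-style counter can only answer the question it was built for, whereas an observability-style structured event carries enough context to answer questions nobody thought to ask in advance. The field names and the simulated request are invented for the example.

```python
import json
import time
import uuid

# Monitoring: a predefined aggregate that answers one known question ("how many?").
request_count = 0

def handle_request(path, user, region):
    global request_count
    request_count += 1

    start = time.monotonic()
    time.sleep(0.01)  # stand-in for the real work

    # Observability: a rich, high-cardinality event per request. Because every
    # field travels with the event, you can later slice by user, region or path,
    # answering questions you did not define up front.
    event = {
        "event_id": str(uuid.uuid4()),
        "path": path,
        "user": user,
        "region": region,
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
    }
    print(json.dumps(event))

handle_request("/checkout", "user-42", "eu-west")
print(f"requests so far: {request_count}")
```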

Here’s what you have to do if you want to join the live session:


Friday, October 26, 2018

Aricent’s Virtualized Access Solution Enables Open OLT Platforms

SANTA CLARA, CA — Aricent, a global design and engineering company, announced support for the Aricent SDvAS Virtual OLT framework (SDN-enabled virtualized Access Solution for Virtual OLT) on Broadcom’s leading PON OLT silicon-based platforms. Aricent SDvAS enables equipment vendors to offer the agility and cost benefits of SDN and NFV, with support for disaggregated architectures using industry-standard Original Device Manufacturer (ODM) platforms such as Edgecore Open OLT platforms and OEM platforms based on Broadcom PON OLT, Broadcom StrataDNX and StrataXGS switch silicon devices.

The Aricent SDvAS Virtual OLT software framework supports disaggregated OLT architectures based on the ONF R-CORD model. Leveraging the ONF R-CORD VOLTHA framework and OpenOLT drivers, Aricent SDvAS enables flexible OLT deployments as an SDN controller application, as a VNF on an x86 compute node, or as a local control plane on the OLT platform. With support for the Broadcom PON OLT MAC family and StrataDNX switch family, the Aricent SDvAS framework on the Edgecore ASXvOLT16 Open OLT platform and the ASGvOLT32 and ASGvOLT64 GPON OLT platforms enables XGS-PON and GPON OLT deployments.

Aricent’s SDvAS is built on the Aricent ISS control plane, which has enabled network equipment providers (NEPs) to build DSLAMs, OLTs, Carrier Ethernet aggregation, service edge, mobile backhaul and packet optical transport equipment. Available with flexible source-code-based licensing models, the Aricent SDvAS portfolio of software frameworks supports SD-WAN Universal CPEs (uCPE), Virtualized OLTs (vOLT), Aricent Fast Path Accelerator (FPA)–based Virtual Routers (vRouter), Virtual CPE (vCPE), Virtualized BNGs (vBNG), and SDN applications for network service provisioning and assurance. SDvAS enables flexible and scalable deployment options with support for industry-standard ODM platforms, VNFs deployable as VMs and containers on x86 compute nodes with hardware acceleration offload, and VNF lifecycle management and orchestration by Aricent MANO.


Huawei Launches Flex-PON 2.0 for Smooth Evolution to XG(S)-PON Networks

BERLIN — Huawei, a global information and communications technology (ICT) solutions provider, launched Flex-PON 2.0, an edge tool for its Intent-Driven FTTx solution. By replacing the PON optical module, Flex-PON 2.0 enables the same service board to be compatible with six PON technologies, fully reusing OLT resources on live networks. This helps operators evolve from GPON to XG(S)-PON, saving about 20 percent in CAPEX and OPEX while greatly improving network bearer capability and device utilization efficiency.

With the explosive growth in user requirements, there is a consensus within the global broadband industry on the importance of developing premium broadband. Achieving smooth evolution to XG(S)-PON and satisfying user expectations, while protecting existing investment and simplifying engineering reconstruction, pose big challenges for operators.

Currently, two reconstruction solutions are available for the upgrade of OLT devices from GPON to XG(S)-PON.

  • The first solution is to add XG(S)-PON service boards and WDM multiplexer devices. This requires a large amount of equipment room space, brings optical line insertion loss, and involves a heavy engineering reconstruction workload.
  • The second solution is to replace GPON service boards with GPON/XG(S)-PON Combo service boards, which have built-in GPON, XG(S)-PON, and WDM. This solution is preferred because it only requires the replacement of service boards. However, it still wastes the existing GPON service boards, and replacing the boards and reworking the engineering still involve a significant workload.

To meet the requirements of ultra-broadband network development, Huawei offers Flex-PON 2.0 based on the MA5800, a next-generation large-capacity distributed smart OLT platform. Compared with Flex-PON 1.0, which has already been put into commercial use, Flex-PON 2.0 enables one service board to support up to six PON modes, including GPON, XG(S)-PON, and GPON/XG(S)-PON Combo mode. Flex-PON 2.0 enables smooth evolution from GPON to XG(S)-PON by replacing the optical module, and provides high-power modules to achieve super-long-distance coverage 10 km beyond the current maximum coverage distance in the industry. By fully reusing OLT service boards without changing the ODN network, Flex-PON 2.0 simplifies engineering reconstruction and saves equipment room space. It can achieve GPON and XG(S)-PON access on a single optical fiber, resolving difficulties in network technology selection and facilitating the fast deployment of gigabit networks.


Quantenna and Cortina Provide Solutions for Wi-Fi 6 Enabled GPON Gateways & Routers

BERLIN — Quantenna Communications and Cortina Access are delivering complete reference designs for next-generation GPON gateways and managed routers with up to 10 Gbps performance, exploiting leading-edge Wi-Fi 6 (802.11ax) capabilities. These designs use Quantenna’s latest Wi-Fi 6 solutions (the QSR10GU-AX, QSR5GU-AX PLUS and QSR10GU-AX PLUS) along with Cortina’s Saturn CA8279 AnyPON gateway chipset and GoldenGate CA7742 quad-A53 gateway network processor. Together, these capabilities provide an end-to-end solution covering a variety of applications, such as fiber gateways supporting seven different PON modes or managed routers capable of handling up to 10 Gbps throughput. To minimize time to market for OEMs and ODMs, the complete designs come with the hardware framework as well as a full software stack.

With the explosion in the number of new Wi-Fi devices, and demanding applications such as online gaming and 4K video streaming, there is an increasing need for higher bandwidth and superior network performance. Quantenna’s advanced Wi-Fi 6 products address this demand and solve the existing challenges in crowded environments by providing enhanced DFS, extended range and superior MU-MIMO capabilities. The unique features of these products include:

QSR10GU-AX PLUS (8×8 + 4×4 11ax)

  • Dynamic switching between 8×8 MIMO and dual 4×4 MIMO in the 5 GHz band, providing end users with the best possible MIMO configuration, tailored to each individual environment
  • MU-MIMO (multi-user MIMO), enabling simultaneous transmission to multiple high-throughput clients

QSR5GU-AX PLUS (5×5 + 4×4 11ax)

  • The addition of a fifth chain allows for up to 50 percent more speed, especially in MU-MIMO operation
  • Enhanced radar detection and spectrum analyzer

Cortina CA7742 (Gateway Network Processor)

  • Quad-A53 with multiple multiplexed SerDes for multi-channel PCIe, USB 2.0/3.0, SATA, and all the necessary peripherals for a comprehensive CPE device
  • Embedded packet encapsulation/decapsulation engine for efficient packet processing, e.g., LAN/Wi-Fi bridging, MAP-E/T acceleration, etc.
  • Embedded CPU-neutral networking engine enabling wire-rate 10 Gb/s WAN-LAN routing
  • Embedded multi-Gb/s crypto engine, together with hacker-proof antifuse OTP and a memory scrambler, for a best-in-class security platform

Cortina CA8279 (PON Gateway Processor)

  • The addition of PON interfaces to the CA7742 gateway processor to support 10G EPON, 10G XGS-PON, and 10G NG-PON2 for both cable and telco operators
  • Integrated Multi-rate SerDes to support direct connection to PON optical transceiver

“Quantenna is leading the industry’s transition to Wi-Fi 6, enabling innovations that provide end users with the best Wi-Fi experience,” said Ambroise Popper, vice president, strategy and corporate marketing at Quantenna. “Cortina has a unique solution, and we are thrilled to extend our collaboration and expedite the availability of Wi-Fi 6 advanced capabilities in the market.”

“With this kind of integration, flexibility, and performance, Cortina together with Quantenna continue to lead the home networking innovation,” said Dr. Stewart Wu, vice president of marketing at Cortina Access.
