Thursday, December 27, 2018

Year-End Tribune TV Blackout Threat for Charter Spectrum Customers

At least six million Charter Spectrum customers could lose access to Tribune Media-owned outlets at 12:01 am on Jan. 1 because the two companies have yet to reach agreement on a retransmission consent contract extension.

Almost three dozen over-the-air stations are impacted, including WGN Chicago, WPIX New York, and KTLA Los Angeles. If the stations are blacked out, customers will also lose access to the NFL playoffs and NCAA basketball games aired on those stations.

“The NFL playoffs begin Jan. 5 and we want football fans in our markets to be able to watch these games and root for their favorite teams—we want to reach an agreement with Spectrum,” said Gary Weitman, Tribune Media’s senior vice president for corporate relations. “We’ve offered Spectrum fair market rates for our top-rated local news, live sports and high-quality entertainment programming, and similarly fair rates for our cable network, WGN America. Spectrum has refused our offer.”

Tribune-owned TV stations

Charter Communications claims it is working hard to reach a fair agreement with Tribune. Any new deal will likely include a significant rate increase for the broadcast station owner's programming, a cost that will eventually be passed on to Spectrum subscribers through the Broadcast TV surcharge, which is now approaching $10 a month in many areas.

Thanks to Phillip Dampier (see source)

Linux commands for measuring disk activity

Linux systems provide a handy suite of commands for helping you see how busy your disks are, not just how full. In this post, we're going to examine five very useful commands for looking into disk activity. Some of these commands may have to be added to your system, and two of them (iotop and ioping) require sudo privileges, but all five provide useful ways to view disk activity.
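
On Debian or Ubuntu systems, for example, any of the tools that aren't already present can usually be installed from the standard repositories (package names may differ slightly on other distributions):

$ sudo apt install dstat sysstat iotop ioping   # sysstat is the package that provides iostat
$ sudo dnf install dstat sysstat iotop ioping   # Fedora/RHEL-style equivalent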

Probably one of the easiest and most obvious of these commands is dstat.

dstat

In spite of the fact that the dstat command begins with the letter "d", it provides stats on a lot more than just disk activity. If you want to view just disk activity, you can use the -d option. As shown below, you’ll get a continuous list of disk read/write measurements until you stop the display with a ^C. Note that the first line reports averages since the system booted; each subsequent row reports disk activity over the following time interval, and the default interval is only one second.

$ dstat -d
-dsk/total-
 read  writ
 949B   73k
  65k     0    <== first second
   0    24k    <== second second
   0    16k
   0    0 ^C

Including a number after the -d option will set the interval to that number of seconds.

$ dstat -d 10
-dsk/total-
 read  writ
 949B   73k
  65k   81M    <== first 10-second interval
   0    21k    <== second 10-second interval
   0  9011B ^C

Notice that the reported data may be shown in a number of different units -- e.g., M (megabytes), k (kilobytes) and B (bytes).

Without options, the dstat command is going to show you a lot of other information as well -- indicating how the CPU is spending its time, displaying network and paging activity, and reporting on interrupts and context switches.

$ dstat
You did not select any stats, using -cdngy by default.
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
  0   0 100   0   0| 949B   73k|   0     0 |   0     3B|  38    65
  0   0 100   0   0|   0     0 | 218B  932B|   0     0 |  53    68
  0   1  99   0   0|   0    16k|  64B  468B|   0     0 |  64    81 ^C

The dstat command provides valuable insights into overall Linux system performance, pretty much replacing a collection of older tools such as vmstat, netstat, iostat, and ifstat with a flexible and powerful command that combines their features. For more insight into the other information that the dstat command can provide, refer to this post on the dstat command.
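
If you also want to see which processes are driving the disk activity, dstat's plugin options can help. Here's a quick sketch (the exact set of plugins available depends on your dstat version):

# Disk totals plus the most expensive I/O and block I/O processes, sampled every 5 seconds
$ dstat -d --top-io --top-bio 5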

iostat

The iostat command helps monitor system input/output device loading by observing the time the devices are active in relation to their average transfer rates. It's sometimes used to evaluate the balance of activity between disks.

$ iostat
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_       (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.07    0.01    0.03    0.05    0.00   99.85

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00       1048          0
loop1             0.00         0.00         0.00        365          0
loop2             0.00         0.00         0.00       1056          0
loop3             0.00         0.01         0.00      16169          0
loop4             0.00         0.00         0.00        413          0
loop5             0.00         0.00         0.00       1184          0
loop6             0.00         0.00         0.00       1062          0
loop7             0.00         0.00         0.00       5261          0
sda               1.06         0.89        72.66    2837453  232735080
sdb               0.00         0.02         0.00      48669         40
loop8             0.00         0.00         0.00       1053          0
loop9             0.01         0.01         0.00      18949          0
loop10            0.00         0.00         0.00         56          0
loop11            0.00         0.00         0.00       7090          0
loop12            0.00         0.00         0.00       1160          0
loop13            0.00         0.00         0.00        108          0
loop14            0.00         0.00         0.00       3572          0
loop15            0.01         0.01         0.00      20026          0
loop16            0.00         0.00         0.00         24          0

Of course, all the stats provided on Linux loop devices can clutter the display when you want to focus solely on your disks. The command, however, does provide the -p option which allows you to just look at your disks -- as shown in the commands below.

$ iostat -p sda
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.07    0.01    0.03    0.05    0.00   99.85

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               1.06         0.89        72.54    2843737  232815784
sda1              1.04         0.88        72.54    2821733  232815784

Note that tps refers to transfers per second.

You can also get iostat to provide repeated reports. In the example below, the trailing 5 provides measurements every five seconds, while the -d option limits the output to the device report.

$ iostat -p sda -d 5
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               1.06         0.89        72.51    2843749  232834048
sda1              1.04         0.88        72.51    2821745  232834048

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0.80         0.00        11.20          0         56
sda1              0.80         0.00        11.20          0         56

If you prefer to omit the first (stats since boot) report, add a -y to your command.

$ iostat -p sda -d 5 -y
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0.80         0.00        11.20          0         56
sda1              0.80         0.00        11.20          0         56

Next, we look at our second disk drive.

$ iostat -p sdb
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.07    0.01    0.03    0.05    0.00   99.85

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb               0.00         0.02         0.00      48669         40
sdb2              0.00         0.00         0.00       4861         40
sdb1              0.00         0.01         0.00      35344          0
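
If you want more detail on how busy each disk actually is, the extended statistics option is worth a look. A quick sketch (column names vary a bit between sysstat versions):

# Extended per-device stats (average wait times, %util and so on) every 5 seconds
$ iostat -x -d sda 5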

iotop

The iotop command is a top-like utility for looking at disk I/O. It gathers I/O usage information provided by the Linux kernel so that you can get an idea which processes are most demanding in terms of disk I/O. In the example below, the loop time has been set to 5 seconds. The display will update itself, overwriting the previous output.

$ sudo iotop -d 5
Total DISK READ:         0.00 B/s | Total DISK WRITE:      1585.31 B/s
Current DISK READ:       0.00 B/s | Current DISK WRITE:      12.39 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
32492 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.12 % [kworker/u8:1-ev~_power_efficient]
  208 be/3 root        0.00 B/s 1585.31 B/s  0.00 %  0.11 % [jbd2/sda1-8]
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init splash
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_gp]
    4 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_par_gp]
    8 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [mm_percpu_wq]
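
If you only care about processes that are actually doing I/O, the -o option filters the display down to those, and batch mode is handy for capturing samples to a file. A minimal sketch (the log file name is just an example):

$ sudo iotop -o -d 5                  # show only processes doing I/O, refreshing every 5 seconds
$ sudo iotop -o -b -n 3 > iotop.log   # batch mode: three reports written to a file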

ioping

The ioping command is an altogether different type of tool, but it can report disk latency -- how long it takes a disk to respond to requests -- and can be helpful in diagnosing disk problems.

$ sudo ioping /dev/sda1
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup)
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms
^C
--- /dev/sda1 (block device 111.8 GiB) ioping statistics ---
3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s
generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s
min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us
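
By default, ioping keeps issuing requests until you stop it. The -c option limits the count, and you can also point ioping at a directory to measure the latency of the filesystem it lives on rather than a raw device. For example:

$ sudo ioping -c 10 /dev/sda1   # stop after 10 requests
$ ioping -c 5 .                 # latency of the filesystem holding the current directory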

atop

The atop command, like top, provides a lot of information on system performance, including some stats on disk activity.

ATOP - butterfly      2018/12/26  17:24:19      37d3h13m------ 10ed
PRC | sys    0.03s | user   0.01s | #proc    179 | #zombie    0 | #exit      6 |
CPU | sys       1% | user      0% | irq       0% | idle    199% | wait      0% |
cpu | sys       1% | user      0% | irq       0% | idle     99% | cpu000 w  0% |
CPL | avg1    0.00 | avg5    0.00 | avg15   0.00 | csw      677 | intr     470 |
MEM | tot     5.8G | free  223.4M | cache   4.6G | buff  253.2M | slab  394.4M |
SWP | tot     2.0G | free    2.0G |              | vmcom   1.9G | vmlim   4.9G |
DSK |          sda | busy      0% | read       0 | write      7 | avio 1.14 ms |
NET | transport    | tcpi 4 | tcpo  stall      8 | udpi 1 | udpo 0swout   2255 |
NET | network      | ipi       10 | ipo 7 | ipfrw      0 | deliv      60.67 ms |
NET | enp0s25   0% | pcki      10 | pcko 8 | si    1 Kbps | so    3 Kbp0.73 ms |

  PID SYSCPU  USRCPU  VGROW   RGROW  ST EXC   THR  S CPUNR   CPU  CMD 1/1673e4 |
 3357  0.01s   0.00s   672K    824K  --   -     1  R     0    0%  atop
 3359  0.01s   0.00s     0K      0K  NE   0     0  E     -    0%  <ps>
 3361  0.00s   0.01s     0K      0K  NE   0     0  E     -    0%  <ps>
 3363  0.01s   0.00s     0K      0K  NE   0     0  E     -    0%  <ps>
31357  0.00s   0.00s     0K      0K  --   -     1  S     1    0%  bash
 3364  0.00s   0.00s  8032K    756K  N-   -     1  S     1    0%  sleep
 2931  0.00s   0.00s     0K      0K  --   -     1  I     1    0%  kworker/u8:2-e
 3356  0.00s   0.00s     0K      0K  -E   0     0  E     -    0%  <sleep>
 3360  0.00s   0.00s     0K      0K  NE   0     0  E     -    0%  <sleep>
 3362  0.00s   0.00s     0K      0K  NE   0     0  E     -    0%  <sleep>

If you want to look at just the disk stats, you can easily manage that with a command like this:

$ atop | grep DSK
DSK |          sda | busy      0% | read  122901 | write 3318e3 | avio 0.67 ms |
DSK |          sdb | busy      0% | read    1168 | write    103 | avio 0.73 ms |
DSK |          sda | busy      2% | read       0 | write     92 | avio 2.39 ms |
DSK |          sda | busy      2% | read       0 | write     94 | avio 2.47 ms |
DSK |          sda | busy      2% | read       0 | write     99 | avio 2.26 ms |
DSK |          sda | busy      2% | read       0 | write     94 | avio 2.43 ms |
DSK |          sda | busy      2% | read       0 | write     94 | avio 2.43 ms |
DSK |          sda | busy      2% | read       0 | write     92 | avio 2.43 ms |
^C
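
atop can also write its samples to a raw file and replay them later, which is handy when you want to look back at disk activity after the fact. A quick sketch (the file name is just an example):

$ atop -w /tmp/atop.raw 10   # record a sample every 10 seconds to a raw file
$ atop -r /tmp/atop.raw      # replay the recorded data later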

Being in the know with disk I/O

Linux provides enough commands to give you good insights into how hard your disks are working and to help you focus on potential problems or slowdowns. Hopefully, one of these commands will tell you just what you need to know when it's time to question disk performance. Occasional use of these commands will help ensure that especially busy or slow disks are obvious when you need to check them.

Thanks to Sandra Henry-Stocker (see source)

Wednesday, December 26, 2018

RTC to Acquire FTTH Pioneer EATEL

RESERVE, LA — Reserve Communications and Computer Corporation (RTC), one of Louisiana’s leading telecommunications companies, announced that it has entered into a definitive agreement to acquire EATEL, a premier provider of telecommunications, internet, video, security, and data center services in Louisiana.

Throughout its history, EATEL has strategically positioned itself as a leading regional provider of fiber-based and data center services through its state-of-the-art network and facilities.

RTC will acquire all of the outstanding membership interests of EATELCORP, L.L.C. from its current owners through a merger transaction. EATELCORP, L.L.C. will be the surviving entity to the merger transaction and a wholly-owned subsidiary of RTC upon closing. The transaction is expected to close in the first half of 2019.

Consolidated Management Teams
Upon consummation of the closing, the senior management teams of RTC and EATEL will be consolidated and the combined RTC and EATEL operations will be headquartered in Gonzales, Louisiana. There are no plans to reduce staffing at any level in either organization and each entity will retain its local identity. Upon consummation of the closing, EATEL and RTC will work jointly to expand opportunities for customers and businesses in southeast Louisiana and the surrounding area.

Thanks to BBC Wires (see source)

Saturday, December 22, 2018

Small Providers Driving Higher Rural Internet Speeds, Adoption

ARLINGTON, VA — Even in the face of persistent challenges, the nation’s independent broadband providers continue to lead the charge in driving deployment of higher internet speeds and greater adoption of broadband services in rural communities, according to a new survey by NTCA – The Rural Broadband Association.

The “NTCA 2018 Broadband/Internet Availability Survey Report” found that NTCA members continue to take substantial steps to replace aging copper networks with fiber connectivity where possible. In turn, broadband speeds made available by NTCA members have increased, with more than 70 percent of survey respondents’ customers having access to 25 Mbps or higher speeds.

The survey results similarly demonstrate gains in rural adoption of better broadband services, with nearly 40 percent of respondents’ customers purchasing broadband at 25 Mbps or higher speeds—up from less than 24 percent in a similar 2016 survey. Almost 16 percent of respondents’ customers now subscribe to services with speeds of 100 Mbps or greater.

Other noteworthy results of the survey include:

  • Despite their significant progress, the challenges of network deployment and ongoing operations in rural areas still hinder smaller carriers’ efforts to deliver higher-speed broadband to many rural residents and businesses in high-cost rural markets. Regulatory and economic concerns were cited as primary challenges, with nearly 30 percent of survey respondents’ customers still lacking access to 25 Mbps broadband service.
  • NTCA members provide critically important, higher-capacity broadband service to nearly all anchor institutions in their communities, including schools, hospitals, and public libraries.
  • Video service is perceived as increasingly important to consumers, yet companies face significant barriers in offering it to their customers. Nearly all respondents pointed to programming costs as a barrier in providing this service; similarly, those considering discontinuing video service attributed this decision to increased programming costs.

“Clearly NTCA members have made great strides in driving both deployment of scalable networks and stimulating adoption of broadband services in their communities,” said NTCA Chief Executive Officer Shirley Bloomfield. “In doing so, they have made significant contributions to the safety, health and well-being of their customers. Although much work remains to be done to advance and sustain broadband in rural America, NTCA members have yet again proven themselves to be leaders in rural broadband and trusted, committed providers for their communities.”

Thirty-one percent of NTCA members participated in the online survey in the spring of 2018. The survey comprised general questions about the respondents’ current operations, competition, marketing efforts and current and planned fiber deployment. The full report is available online.

Thanks to BBC Wires (see source)

Friday, December 21, 2018

Network management must evolve in order to scale container deployments

Applications used to be vertically integrated, monolithic software. Today, that’s changed: modern applications are composed of separate micro-services that can be quickly brought together and delivered as a single experience. Containers allow these app components to be spun up significantly faster and run for shorter periods of time, providing the ultimate in application agility.

The use of containers continues to grow. A recent survey from ZK Research found that 64 percent of companies already use containers, with 24 percent planning to adopt them by the end of 2020. (Note: I am an employee of ZK Research.) This trend will cause problems for network professionals if the approach to management does not change.

In a containerized environment, the network plays a crucial role in ensuring the various micro-services can connect to one another. When connectivity problems happen, data or specific services can’t be reached, which causes applications to perform poorly or be unavailable. As the use of containers continues to grow, how networks are managed needs to be modernized to ensure the initiatives companies are looking to achieve with them are met.

Legacy network management methods no longer sufficient

The current model for managing networks, which has been in place for decades, uses sampled data from protocols such as Simple Network Management Protocol (SNMP) and NetFlow. That means data is collected periodically, such as every 30 seconds or every minute, instead of being updated in real time.

Sampled data is fine for looking at long-term trends, such as aggregate data usage for capacity planning, but it’s useless as a troubleshooting tool in highly containerized environments because events that happen between the sampling periods can often be missed. That isn’t an issue with physical servers or virtual machines (VMs), as these tend to have lifespans that are longer than the sampling intervals, but containers can often be spun up and then back down in just a few seconds. Also, VMs move and can be traced, whereas containers are deprecated and can be invisible to traditional management systems.

Container sprawl an emerging problem

Also, highly containerized environments are subject to something called “container sprawl.” Unlike VMs, which can take hours to boot, containers can be spun up almost instantly and then run for a very short period of time. This increases the risk of container sprawl, where containers can be created by almost anyone at any time without the involvement of a centralized administrator.

In addition, IT organizations typically run about eight to 10 VMs per physical server but about 25 containers per server, so it’s easy to see how fast container sprawl can occur.

A new approach to managing the network is required — one that can provide end-to-end, real-time intelligence from the host to the switch. Only then will businesses be able to scale their container environments without the risk associated with container sprawl. Network management tools need to adapt and provide visibility into every trace and hop in the container journey instead of being device centric. Traditional management tools have a good understanding of the state of a switch or a router, but management tools need to see every port, VM, host, and switch to be aligned with the way containers operate.

Real-time telemetry mandatory in containerized data centers

To accomplish this, network management tools must evolve and provide granular visibility across network fabrics, as well as insight from the network to the container and everything in between. Instead of using unreliable sampled data, container management requires the use of real-time telemetry that can provide end-to-end visibility and be able to trace the traffic from container to container.

The term “telemetry” is very broad, and some vendors use it to describe legacy data models such as SNMP. Most of these older methods use a pull model in which data must be requested. Real telemetry is a push model, similar to flow data. Telemetry also allows devices to be monitored with no impact on system performance. This is a huge improvement over older protocols, which would often degrade the performance of the network device; that is why monitoring was frequently turned off or sampling intervals were increased to the point where the data was no longer useful.
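
To make the pull-versus-push difference concrete, compare a traditional SNMP poll with a streaming-telemetry subscription. The sketch below uses net-snmp's snmpget and the open-source gnmic gNMI client against a hypothetical switch address; the paths, credentials and tooling on your platform will differ:

# Pull model: each counter value has to be requested (and re-requested) explicitly
$ snmpget -v2c -c public 192.0.2.10 IF-MIB::ifHCInOctets.1

# Push model: subscribe once and the device streams interface counters on its own
$ gnmic -a 192.0.2.10:57400 -u admin -p admin --insecure \
    subscribe --path "/interfaces/interface/state/counters" --stream-mode sample --sample-interval 5s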

Because the telemetry information shows the state of every container and every interface, it provides the insight to understand the relationship between the containers. Management tools that use telemetry provide network and application engineers the visibility they need to design, update, manage, and troubleshoot container networks. It’s important that the management tool be “closed loop” in nature, so it continually improves the accuracy of the product.

Telemetry shines a light on current blind spots

A benefit of telemetry is an ability to trace or see containers across the network to quickly identify blind spots that can’t be seen with legacy tools that rely on sampled data. This is particularly beneficial when dealing with chronic problems that have short lifespans but occur frequently. Without real-time visibility, the problem can often disappear by the time network operations begins the troubleshooting process. Real-time tracing can be used to look for anomalies and enable network professionals to fix problems before they impact the business. 

Containers are coming and coming fast, and legacy network management tools are no longer sufficient because they provide limited data, which leads to incomplete analysis and many blind spots. The inability to correlate container problems with network issues will lead to application outages or poor performance, which can have a direct impact on user productivity and customer experience, impacting the top and bottom line.

Network professionals should transition away from older network management systems to those that use telemetry, as this will enable engineers to see every container, the underlying infrastructure, and the relationship among the various elements.  

Thanks to Zeus Kerravala (see source)

Cisco patches a critical vulnerability in its software license manager

Cisco this week said it has patched a “critical” vulnerability in its Prime License Manager (PLM) software that would let attackers execute arbitrary SQL queries.

The Cisco Prime License Manager offers enterprise-wide management of user-based licensing, including license fulfillment.

Released in November, the first version of the Prime License Manager patch caused its own “functional” problems that Cisco was then forced to fix. That patch, called ciscocm.CSCvk30822_v1.0.k3.cop.sgn, addressed the SQL vulnerability but caused backup, upgrade and restore problems, and should no longer be used, Cisco said.

Cisco wrote that “customers who have previously installed the ciscocm.CSCvk30822_v1.0.k3.cop.sgn patch should upgrade to the ciscocm.CSCvk30822_v2.0.k3.cop.sgn patch to remediate the functional issues. Installing the v2.0 patch will first rollback the v1.0 patch and then install the v2.0 patch.”

As for the vulnerability that started this process, Cisco says it “is due to a lack of proper validation of user-supplied input in SQL queries. An attacker could exploit this vulnerability by sending crafted HTTP POST requests that contain malicious SQL statements to an affected application. A successful exploit could allow the attacker to modify and delete arbitrary data in the PLM database or gain shell access with the privileges of the postgres [SQL] user.”

The vulnerability impacts Cisco Prime License Manager Releases 11.0.1 and later.

Thanks to Michael Cooney (see source)

BrandPost: 5 Steps to Get Ready for 802.11ax

It’s a mobile, cloud and IoT world. Whether it’s a workplace, a classroom or a dorm room, people are using a variety of devices beyond the usual phone and laptop combo—smart TVs, gaming consoles and fitness trackers, for starters. Offices, stores and manufacturing floors are outfitted with sensors that keep us safe, regulate the temperature, water the plants and run the production line. People use a broad variety of applications such as voice and video from their mobile devices. And expectations are at an all-time high. People will simply ditch digital services that don’t work. No one wants jittery presentations and dropped calls. The network experience must be flawless.

The newest Wi-Fi—802.11ax or Wi-Fi 6 as it’s now called—delivers the capacity and reliability needed to deliver on these amazing experiences. This new standard has been positioned as ideal for “dense deployments” like large classrooms, malls and stadiums. That’s because 802.11ax increases throughput by 4X on average compared to 802.11ac. But the definition of a “dense deployment” is evolving. When each person has multiple devices and there are IoT devices everywhere, even a business with 25 or 50 employees can be a dense deployment.

We discuss five steps to get ready for 802.11ax in our webinar, Getting Ready for 802.11ax; see the link at the end of this post.

Aruba recently introduced the 510 Series of high-performance 802.11ax APs. The 510 series leverages the full suite of the 802.11ax technology—and more. In addition to the standard capabilities, Aruba offers many other ways to deliver an amazing network experience, including intelligent RF optimization, universal connectivity for IoT, enhanced security and always-on networking.

2019 will see the introduction of the 802.11ax-capable versions of our favorite phones, laptops and IoT devices. It’s a good time to start asking some questions:

  1. When we move to 802.11ax, do we need to upgrade our switches? 11ax APs, due to their higher performance, need more power, creating concerns that an upgrade to the wireless means an upgrade to the wired network as well. But you can take a measured approach with an innovative capability in Aruba’s 510 Series APs called Intelligent Power Monitoring, which enables you to dynamically turn off pre-selected features you don’t need, such as Bluetooth, to meet the power requirements of the AP. Later you can upgrade to new switches that support the 802.3bt standard for 60 watts of power.
  2. What about security? 11ax leverages WPA3 for core security enhancements, including advanced encryption. Support for Enhanced Open makes it possible to securely connect users in traditionally open networks such as coffee shops, airports and restaurants: all traffic gets encrypted with Opportunistic Wireless Encryption (OWE). To further enhance security, Aruba customers can dynamically segment their networks so that a single policy is enforced for each client, whether it is connected via wired or wireless.
  3. What’s your energy plan? You can deliver more Wi-Fi performance to your users and devices without spiking the energy bill. With Aruba’s 510 Series APs and Aruba NetInsight, you can put your APs in deep sleep mode when they’re not being used, such as evenings and weekends.
  4. What are your IoT requirements? Are you supporting IoT devices that use Zigbee or Bluetooth? Since Zigbee, Wi-Fi and Bluetooth cover 74% of IoT use cases, odds are you are supporting one of these now or will be in the near future.
  5. Do you have the flexibility of management in the cloud? Do you want to use the on-premises AirWave to manage your wired and wireless network? Or do you want the convenience and simplicity of cloud-based management like Aruba Central so you can deploy and manage the network from any browser? Aruba APs can also be deployed in Instant mode with a virtual controller or be controller-based.

Employees and customers don’t want to think about the network. They want to focus on their experiences and applications. Migrating to 802.11ax can enable organizations to deal with the unprecedented diversity of devices and people’s expectations for amazing experiences.

Want to learn more?

Watch the 5 Steps to Get Ready for 802.11ax webinar.

https://www.brighttalk.com/webcast/13679/341592


Thanks to Brand Post (see source)

Want to use AI and machine learning? You need the right infrastructure

Artificial intelligence (AI) and machine learning (ML) are emerging fields that will transform businesses faster than ever before. In the digital era, success will be based on using analytics to discover key insights locked in the massive volume of data being generated today.

In the past, these insights were discovered using manually intensive analytic methods. Today, that doesn’t work, as data volumes continue to grow, as does the complexity of the data. AI and ML are the latest tools for data scientists, enabling them to refine the data into value faster.

Data explosion necessitates the need for AI and ML

Historically, businesses operated with a small set of data generated from large systems of record. Today’s environment is completely different, with orders of magnitude more devices and systems generating their own data that can be used in the analysis. The challenge for businesses is that there is far too much data to be analyzed manually. The only way to compete in an increasingly digital world is to use AI and ML.

AI and ML use cases vary by vertical

AI and ML apply across all verticals, although there is no universal “killer application.” Instead there are a number of “deadly” use cases that apply to various industries. Common use cases include:

  • Healthcare – Anomaly detection to diagnose MRI scans faster
  • Automotive – Classification is used to identify objects in the roadway
  • Retail – Predictions can accurately forecast future sales
  • Contact center – Translation enables agents to converse with people in different languages

The right infrastructure, quality data needed

Regardless of use case, AI/ML success depends on making the right infrastructure choice, which requires understanding the role of data. AI and ML success is largely based on the quality of data fed into the systems. There’s an axiom in the AI industry stating that “bad data leads to bad inferences”— meaning businesses should pay particular attention to how they manage their data. One could extend the axiom to “good data leads to good inferences,” highlighting the need for the right type of infrastructure to ensure the data is “good.”

Data plays a key role in every use case of AI, although the type of data used can vary. For example, innovation can be fueled by having machine learning find insights in the large data lakes being generated by businesses. In fact, it’s possible for businesses to cultivate new thinking inside their organization based on data sciences. The key is to understand the role data plays at every step in the AI/ML workflow. 

AI/ML workflows have the following components:

  • Data collection: Data aggregation, data preparation, data transformation and storage
  • Data science/engineering: Data analysis, data processing, security and governance
  • Training: Model development, validation and data classification
  • Deployment: Execution inferencing

One of the most significant challenges with data is building a data pipeline in real time. Data scientists who conduct exploratory and discovery work with new data sources need to collect, prepare, model and infer. Therefore, IT requirements change during each phase and as more data is gathered from more sources.

It’s also important to note that the workflow is an iterative cycle in which the output of the deployment phase becomes an input to data collection and improves the model. The success of moving data through these phases depends largely on having the right infrastructure.

Key considerations for infrastructure that supports AI and ML

  • Location: AI and ML initiatives are not solely conducted in the cloud nor are they handled on premises. These initiatives should be executed in the location that makes the most sense given the output. For example, a facial recognition system at an airport should conduct the analysis locally, as the time taken to send the information to the cloud and back adds much latency to the process. It’s critical to ensure that infrastructure is deployed in the cloud, in the on-premises data center, and at the edge so the performance of AI initiatives is optimized.
  • Breadth of high-performance infrastructure: As mentioned earlier, AI performance is highly dependent on the underlying infrastructure. For example, graphical processing units (GPUs) can accelerate deep learning by 100 times compared to traditional central processing units (CPUs). Underpowering the server will cause delays in the process, while overpowering wastes money. Whether the strategy is end-to-end or best-of-breed, ensure the compute hardware has the right mix of processing capabilities and high-speed storage. This requires choosing a vendor that has a broad portfolio that can address any phase in the AI process.
  • Validated design: Infrastructure is clearly important, but so is the software that runs on it. Once the software is installed, it can take several months to tune and optimize to fit the underlying hardware. Choose a vendor that has pre-installed the software and has a validated design in order to shorten the deployment time and ensure the performance is optimized.
  • Extension of the data center: AI infrastructure does not live in isolation and should be considered an extension of the current data center. Ideally, businesses should look for a solution that can be managed with their existing tools.
  • End-to-end management: There’s no single “AI in a box” that can be dropped in and turned on to begin the AI process. It’s composed of several moving parts, including servers, storage, networks, and software, with multiple choices at each position. The best solution would be a holistic one that includes all or at least most of the components that could be managed through a single interface.
  • Network infrastructure: When deploying AI, an emphasis is put on GPU-enabled servers, flash storage, and other compute infrastructure. This makes sense, as AI is very processor and storage intensive. However, the storage systems and servers must be fed data that traverses a network. Infrastructure for AI should be considered a “three-legged stool” where the legs are the network, servers, and storage. Each must be equally fast to keep up with each other. A lag in any one of these components can impair performance. The same level of due diligence given to servers and storage should be given to the network.
  • Security: AI often involves extremely sensitive data such as patient records, financial information, and personal data. Having this data breached could be disastrous for the organization. Also, the infusion of bad data could cause the AI system to make incorrect inferences, leading to flawed decisions. The AI infrastructure must be secured from end to end with state-of-the-art technology.
  • Professional services: Although services are not technically considered infrastructure, they should be part of the infrastructure decision. Most organizations, particularly inexperienced ones, won’t have the necessary skills in house to make AI successful. A services partner can deliver the necessary training, advisory, implementation, and optimization services across the AI lifecycle and should be a core component of the deployment.
  • Broad ecosystem: No single AI vendor can provide all technology everywhere. It’s crucial to use a vendor that has a broad ecosystem and can bring together all of the components of AI to deliver a full, turnkey, end-to-end solution. Having to cobble together the components will likely lead to delays and even failures. Choosing a vendor with a strong ecosystem provides a fast path to success.

Historically, AI and ML projects have been run by data science specialists, but that is quickly transitioning to IT professionals as these technologies move into the mainstream. As this transition happens and AI initiatives become more widespread, IT organizations should think more broadly about the infrastructure that enables AI. Instead of purchasing servers, network infrastructure, and other components for specific projects, the goal should be to think more broadly about the business’s needs both today and tomorrow, similar to the way data centers are run today. 

Thanks to Zeus Kerravala (see source)

How to boost collaboration between network and security teams

When Tim Callahan came to Aflac four years ago to take on the role of CISO, enterprise security at the insurance giant was embedded deep in the infrastructure team.

One of his first requests of the CIO: Let me extract security out into its own group. Callahan readily admits the culture shift was not easy but believes that the demarcation has actually led to better collaboration.

“Networking and security are distinct roles, and mixing them as a single group is dangerous,” he says. “In our highly regulated industry, we have to show separation of duty.”

Arguing for a walled-off security team is not easy for security leaders amid a shrinking talent pool of qualified security professionals. Analyst firm ESG found that from 2014 to 2018, the percentage of respondents to a global survey on the state of IT claiming a problematic shortage in cybersecurity skills at their organization more than doubled from 23% to 51%.

Callahan maintains, though, that you can restructure into two teams successfully as long as you clearly communicate the objectives of each team, along with the roles and responsibilities team members carry, and are willing to use innovation and automation to supplement human resources.

At Aflac, security owns the responsibility for monitoring the environment, informing the organization of attacks and vulnerabilities, and creating standards and protocols. “We determine the risk through a strong vulnerability management program and then lay out priorities for remediation for the network team to follow,” Callahan says. “Having clear lines fosters respect for each other’s profession and builds a healthier environment overall.”

The Aflac security team uses a Responsibility Assignment Matrix, charting which participants are responsible and/or accountable, need to be coordinated with, and/or need to be informed at different stages of a project life cycle. This only works, though, if security is seen as an essential part of every IT endeavor, not an afterthought, according to Callahan.

“We’re brought in early in the networking team’s development cycle to make sure the code created is truly secure,” he says. “We aren’t finding out just ahead of production so that we’re left to decide if we let it go as ‘insecure’ or get accused of stopping progress.”

Why to keep security distinct from networking

Chris Calvert, co-founder of Respond Software, maker of an automation tool that uses artificial intelligence to simulate the reactions of a security analyst, says it’s important that security doesn’t get lost in the IT shuffle.

“Some of the security operations centers I built put security in with IT, and security would wind up getting kicked

Thanks to Sandra Gittlen (see source)

That’s It for 2018 ;)

It’s been a long year – over 230 blog posts, 30 live webinar sessions, three online courses, a half-dozen workshops, tons of presentations… it’s time for Irena and me to disconnect, and so should you.

Wish you a quiet and merry Christmas with your loved ones and all the best in 2019! We’ll be back in early January.

Thanks to Ivan Pepelnjak (see source)

Thursday, December 20, 2018

Your CenturyLink Internet Access Blocked Until You Acknowledge Their Ad

(Image courtesy of: Rick Snapp)

CenturyLink customers in Utah were rudely interrupted earlier this month by an ad for CenturyLink’s pricey security and content filtering software that left their internet access disabled until they acknowledged reading the ad.

Dear Utah Customer,

Your internet security and experience is important to us at CenturyLink.

The Utah Department of Commerce, Division of Consumer Protection requires CenturyLink to inform you of filtering software available to you. This software can be used to block material that may be deemed harmful to minors.

CenturyLink’s @Ease product is available here and provides the availability of such software.

As a result of the forced ad, all internet activity stopped working until a customer opened a browser session to first discover the notification, then clear it by hitting the “OK” button at the bottom of the screen. This irritated customers who use the internet for more than just web browsing.

One customer told Ars Technica he was watching his Fire TV when streaming suddenly stopped. After failed attempts at troubleshooting, the customer checked his web browser and discovered the notification message. After clicking “OK,” his service resumed.

A CenturyLink spokesperson told KSL News, “As a result of the new law, all CenturyLink high-speed internet customers in Utah must acknowledge a pop-up notice, which provides information about the availability of filtering software, in order to access the internet.”

According to a detailed report by Ars Technica, CenturyLink falsely claimed that the forced advertisement was required by Utah state law; in fact, the company would have been in full compliance simply by notifying customers “in a conspicuous manner” that such software was available.

CenturyLink chose to turn the Utah law to their profitable advantage by exclusively promoting its own product — @Ease, a costly ISP-branded version of Norton Security. CenturyLink recommended customers choose its Advanced package, which costs $14.95 a month. But parental filtering and content blocking tools are not even mentioned on the product comparison page, leaving customers flummoxed about which option to choose.

In effect, CenturyLink captured an audience and held their internet connection hostage — an advantage most advertisers can only dream about. CenturyLink countered that only residential customers had their usage restricted, and that because of the gravity of the situation, extraordinary notification methods were required.

But as Ars points out, no other ISP in the state went to this extreme level (and used it as an opportunity to make more money with self-interested software pitches).

Bill sponsor Sen. Todd Weiler (R) said ISPs were in compliance simply by putting a notice on a monthly bill or sending an e-mail message to customers about the software. Weiler added that ISPs had all of 2018 to comply and that most had already done so. AT&T, for example, included the required notice in a monthly bill statement. CenturyLink waited until the last few weeks of the year and used the requirement as an opportunity to upsell customers to expensive security solutions most do not need.

With the demise of net neutrality, ISPs that were forbidden to block or throttle content for financial gain are now doing so, with a motivation to make even more money from their customers.

Thanks to Phillip Dampier (see source)

IDG Contributor Network: Can TLS 1.3 and DNSSEC make your network blind?

Domain name system (DNS) over transport layer security (TLS) adds an extra layer of encryption, but in what way does it impact your IP network traffic? The additional layer of encryption indicates controlling what’s happening over the network is likely to become challenging.

Most noticeably it will prevent ISPs and enterprises from monitoring the user’s site activity and will also have negative implications for both; the wide area network (WAN) optimization and SD-WAN vendors.

During a recent call with Sorell Slaymaker, we rolled back in time and discussed how we got here, to a world that will soon be fully encrypted. We started with SSL, the original secure version of HTTP (HTTPS) as opposed to non-secure HTTP. The early SSL versions had many security vulnerabilities, so the protocol evolved through SSL 3.0 and the early TLS releases to TLS 1.2.

However, even though we were heading in the right direction, TLS 1.2 still has significant technical disadvantages. One major disadvantage is that a connection can be negotiated backward to older, vulnerable protocol versions. Because of those known vulnerabilities, the US PCI DSS 3.1 standard prohibited the use of SSL and early TLS.

Introducing TLS 1.3

The above drawbacks drove the requirement for TLS 1.3, which does a few things better. First of all, the setup is faster: the full handshake completes in one round trip instead of two, which improves performance and reduces latency. Also, the server certificate is encrypted, which is another added advantage. Overall, it provides better privacy and performance than the previous versions of TLS.

However, encrypting the certificate has a number of implications. Firewalls and network proxies often use the certificate name to understand which site you are connecting to. Each website presents a different certificate, so if you can see the certificate name, you can identify and classify the session.

Today when you do a DNS lookup, you send out a DNS request in cleartext, and the DNS server that responds gives back the IP address that directs your browser to that server. When DNS over TLS is used, however, the DNS lookup itself is carried inside an encrypted TLS session.

As a result, both the DNS query and the TLS handshake (including the certificate name) end up encrypted. DNSSEC, by contrast, adds cryptographic signatures to existing DNS records so that responses can be authenticated, but it does not hide them.
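
The difference is easy to see from the command line. As a quick illustration (the tools and the public resolver address below are just examples), dig with the +dnssec flag returns signed records that are still visible on the wire, while knot's kdig can send the same lookup over an encrypted DNS-over-TLS session:

$ dig +dnssec example.com          # signed (RRSIG) answer, but the query is still cleartext
$ kdig @1.1.1.1 +tls example.com   # same lookup carried inside an encrypted TLS session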

Network proxies will become useless

The common network architecture relies on Internet relays (proxies) that permit or deny, log, and understand a user's intended destination. Ultimately, encrypting DNS queries will make these network proxies useless.

However, privacy advocates, such as Google, are all for this and will be shipping TLS 1.3 in Android 8.1 and in the coming versions of Google Chrome. In a world of BYOD, it's going to be exhausting to stop this transition.

The move toward encryption is making us think differently about network design. Google amplified the use of TLS by ranking encrypted sites higher in its search results than non-encrypted sites. For this reason, it is now estimated that over 70% of the pages delivered in the US are encrypted.

Encrypting and corresponding network implications

This has clear implications for big networks. As a network operator, whether you work at an enterprise or an ISP, two questions often surface: how do you identify a session, and how do you troubleshoot it? TLS encrypts everything after the TLS header, so you can still look at TCP session information such as retransmits and window sizing. However, is this enough?

You can't identify and control which sites users are going to, so you lose the ability to control and log. Network operators also can't troubleshoot effectively, because they can’t identify which sessions are going where. They can, however, go in through the back door by looking at IP addresses, but that's quite challenging, especially if the destination is somewhere in the cloud, such as AWS.

The introduction of TLS 1.3 will also limit caching. A lot of ISPs and enterprises use wide area application services or content distribution services to cache information locally. Large email attachments and screens for common applications such as Salesforce can all be cached locally, but with TLS 1.3 they cannot be.

The most controversial part of TLS 1.3 is the server name indication (SNI) extension and whether it gets encrypted or not. One workaround is not to encrypt the SNI. A second potential workaround is for cloud applications and others to share their encryption keys with proxies and ISPs so that the information needed to determine client usage can be decrypted. The third option is to delay the rollout of TLS 1.3.

But why would you delay something that brings so many benefits? TLS 1.3 offers a standard and robust security model, both for the setup and for the encryption itself. A major drawback of TLS 1.2 is that it allows encryption algorithms that are less secure or have known vulnerabilities. TLS 1.3, by contrast, only includes support for algorithms with no known vulnerabilities.
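
One practical way to see where a given server stands is to ask for TLS 1.3 explicitly with a recent OpenSSL (1.1.1 or later); the host name below is just a placeholder:

# Reports the negotiated protocol version and ciphersuite, or fails if TLS 1.3 isn't supported
$ openssl s_client -connect example.com:443 -tls1_3 -brief < /dev/null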

As a matter of fact, we have a conflict of interest. Network operators, ISPs and enterprises don't want to lose visibility, but users and cloud providers like Google are pushing for TLS 1.3. Most likely TLS 1.3 will become a reality, which is expected to present many problems for WAN optimization and SD-WAN vendors.

Effects on WAN optimization and SD-WAN vendors

Today, the WAN optimization market is getting hit hard by the growth of TLS-encrypted traffic, which is compounded by the rapid growth of real-time voice and video traffic that gains little from optimization. This has led to a decline in the WAN optimization market, and as a result many WAN optimization vendors are entering the SD-WAN market.

However, the SD-WAN market will also be impacted by the introduction of DNS over TLS, because some SD-WAN vendors rely on DNS to drive their automatic application identification schemes.

Automatic application identification is the process by which SD-WAN vendors examine TCP/UDP port numbers to identify applications such as web or email traffic; DNS is then employed to identify the application using the underlying service.

One workaround that some SD-WAN vendors can adopt is to use the TLS setup process for application identification and network prioritization, and then use a session state-aware device to carry out a reverse DNS lookup on the destination IP.

Once a session state-aware device such as a firewall or a proxy is in the traffic path, it tracks and ties all packets in the flow as opposed to simply forwarding them as a layer 3 device would do.

Alternatively, you could use the TLS setup process to extract the DNS name. This too requires the device in the traffic path to be session state-aware. Essentially the ideal workaround is to have a device that is session state-aware.
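
A reverse lookup itself is straightforward from the command line; whether it returns a useful name depends on how the destination's PTR records are maintained. The address below is illustrative only:

$ dig -x 203.0.113.7 +short   # reverse (PTR) lookup for a destination IP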

This article is published as part of the IDG Contributor Network. Want to Join?

Thanks to Matt Conran (see source)

Windstream Sells Fiber Assets in MN and NE for $60.5 Million

LITTLE ROCK, AR — Windstream, a provider of advanced network communications and technology solutions, announced that it has sold certain fiber assets in Minnesota to Arvig Enterprises, a Minnesota-based provider of telecommunications and broadband services. The all-cash transaction is valued at $49.5 million.

Windstream also announced that it has entered into a definitive agreement to sell additional fiber assets in Nebraska to Arvig for $11 million. The Nebraska sale is expected to close in the first quarter of 2019.

As part of the transactions, Windstream will establish a fiber network relationship with Arvig, allowing Windstream to utilize the assets to continue to sell its products and services in Minnesota and Nebraska.

Monetizing Latent Dark Fiber Assets
“These transactions monetize latent dark fiber assets in Minnesota and Nebraska, lower capital requirements in each state and allow us to focus on our core network offerings with minimal change to our operations going forward. The structure also sets a roadmap for future fiber monetization across our footprint,” said Bob Gunderman, chief financial officer and treasurer for Windstream.

“Expanding our broadband footprint is core to our strategic priorities and bolsters our fiber assets throughout Minnesota and across the Midwest,” said David Arvig, vice president and chief operating officer at Arvig. “The additional Nebraska transaction will provide a critical link for our network beyond Minnesota supporting our continued growth throughout the region.”


Thanks to BBC Wires (see source)

AFL Launches New Plenum Tight Buffered Fiber Optic Cable

SPARTANBURG, SC — AFL, an international manufacturer of fiber optic cable, equipment and accessories, introduces its high fiber count Indoor/Outdoor Plenum tight buffered cables. Ideal for campus environments, the helically-stranded design is available in counts from 36 to 72 fibers. The sub-units and outer sheath contain a UV stabilizer and anti-fungus protection for use in outdoor applications.

The OFNP cable features water-blocked sub-units which meet water penetration requirements of GR-20-CORE to help ensure that any damage to the cable is restricted to a repairable length of several meters. The outer jacket is moisture-resistant, fungus-resistant and UV-resistant for outdoor use, and can be used in all environments including riser, general inside plant and outside plant.

The cables are tested to meet or exceed EIA/TIA 568-A/GR-409-CORE and ICEA-S-104-696, and are compliant with Directive 2002/95/EC (RoHS).


Thanks to BBC Wires (see source)

GigSouth, Southern Telecom to Expand Rural Broadband Network

ATLANTA — Southern Telecom (STI), which provides long-haul and metropolitan dark fiber connecting Atlanta with other cities throughout the Southeast, announces its partnership with GigSouth, a full service, carrier-neutral fiber provider and engineering consultant for rural broadband development headquartered in Atlanta. GigSouth has entered a leasing agreement with STI for dark fiber that will expand its current network within target rural areas in the Southeast US, utilizing STI’s metro dark fiber service and backbone fiber network.

Unlimited Bandwidth Potential
“GigSouth is changing the game, flipping the market on its head, jumping a couple generations of service offerings, lowering pricing so that big data is affordable and providing our customers with white glove support while doing it,” explains Kevin Paulk, chief technology officer of GigSouth. “STI’s metro dark fiber service supports our mission and then some. With STI, we basically have an unlimited amount of bandwidth potential, which allows us to lower pricing on our existing Gigabit internet offering and offer connectivity that is beyond gigabit to residential and commercial customers. We reached 10 Gbps two years ago and are proud to announce our new 100 Gbps offering, to be launched by the end of the year. STI’s fiber is allowing us to make this huge jump, which guarantees we will be ready for the next surge in data demands.”

Southern Telecom presently markets or owns more than 2,600 route miles in the Southeast, anchored by a robust conduit and dark fiber metro network throughout Atlanta.

Providing Choice for Consumers
“The majority of consumers in rural areas typically only have one choice when it comes to internet access, and this lack of competition often results in higher pricing, subpar support and slow service,” states Michael Lamar, carrier account manager for STI. “STI is honored to provide our dark fiber services and fiber optic network to GigSouth as the company works hard to change this dynamic, grow its network and stay ahead of the curve and demand for high-speed, reliable bandwidth in underserved rural communities.”

“As we embark on this partnership with STI, we are focusing on the Atlanta region initially, and have plans to expand our network to many other local markets via STI fiber,” concludes Paulk. “GigSouth is very thankful and appreciative that a large utility-owned company like STI is devoting its time and resources to help another private telco grow to reach more people. Without STI, we would not have an affordable, solid, dark fiber solution. We look forward to a long, mutually beneficial relationship with STI and reaching more satisfied customers as a result.”


Thanks to BBC Wires (see source)

Wednesday, December 19, 2018

Zero-Touch Provisioning with Patrick Ogenstad (Part 2)

Last week we published the first half of our interview with Patrick Ogenstad, guest speaker in the Spring 2019 Building Network Automation Solutions online course (register here). Here’s the second half.

ZTP is about provisioning. Can this include configuration as well?

You could argue that provisioning is a form of configuration, and in that sense provisioning can certainly include configuration. Whether your ZTP solution is good at configuration management is another question.

I would say that the goal of the ZTP system should be to get the device into a state where it can be handed over to the configuration management system. It might be that you use the same tool for everything; there are rather few tools out there, however, that are masters of all trades.

ZTP can be used internally connecting to an internal provisioning server, and it can be used externally connecting to an external provisioning server. Some commercial products use ZTP in connection with a vendor-controlled cloud-based provisioning server. What are the security risks if a vendor can push data to customer equipment?

Microsoft had a great article many years ago called Ten Immutable Laws of Security, in which one of the laws states that a computer is only as secure as the administrator is trustworthy. I'm not trying to say that the operators behind these solutions are untrustworthy, just that each organization has to take into account who they trust with what.

There will always be security risks involved regardless of what we do. The attack surface will be different with a service like this; on the other hand, that doesn't mean it is worse than what most companies have today. A cloud-based service can be helpful for setting up a new office where you don't have a network in place. However, as mentioned earlier, it still requires that the internet connection uses DHCP, if we want to keep it a zero-touch install, that is.

What tools are available to develop a ZTP solution?

If we are talking about creating a custom solution, there are a lot of open source tools that can serve as a base. DHCP will be needed, so ISC DHCP or Kea are good alternatives. For devices that support ZTP using a web server, Nginx can be helpful for serving files, but you can also write your own web application using Flask or Django.
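
As a rough illustration of the Flask approach, the sketch below serves a bootstrap configuration keyed on a device serial number. The inventory, template and URL layout are invented for this example; a real deployment would pull the data from an inventory system and serve it over HTTPS.

from flask import Flask, abort, render_template_string

app = Flask(__name__)

# Hypothetical device inventory keyed by serial number (illustrative data only).
INVENTORY = {
    "FDO12345678": {"hostname": "branch-sw-01", "mgmt_ip": "10.10.1.2/24"},
}

CONFIG_TEMPLATE = """hostname {{ hostname }}
interface Vlan1
 ip address {{ mgmt_ip }}
"""

@app.route("/ztp/<serial>")
def bootstrap_config(serial):
    device = INVENTORY.get(serial)
    if device is None:
        abort(404)  # unknown serial: refuse to hand out a configuration
    return render_template_string(CONFIG_TEMPLATE, **device)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)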

I would, however, recommend starting by stepping away from all of the tools and instead looking at the process you currently use to install devices, not just getting the initial configuration onto the box after it has powered up. Look at what steps need to get done for the new device to work as intended. Having the correct configuration on the device is one thing, but it might also mean that the device gets added to a network monitoring system. Start by writing down all the steps that need to get done and then look at what tools can solve those problems.

Are there any standards yet?

While DHCP and TFTP have been used for ZTP-style installs for a long time, there has, as far as I know, never been a standards discussion specifically about how to provision devices. Looking into the future, however, there is an IETF draft called Zero Touch Provisioning for Network Devices (https://ift.tt/2EEblm9) that looks interesting. I wouldn't dare to guess when we might have devices that support that concept.

How would you start and structure a ZTP project?

I would start by writing down all the manual steps needed to install a new device and integrate it into the network. Hopefully, I would have colleagues to talk to about this as I'm bound to miss some of the steps.

Then, I would look at each task and try to find a solution that could automate that step. If I couldn't get my hands on a tool for a specific part, I would write my own. I would start by trying to solve the easy problems first and be happy even if the ZTP solution would require a few manual steps to begin with and then work from there to improve it.


Want to know more? Patrick will talk about ZTP in the Spring 2019 Building Network Automation Solutions online course (register here). In the meantime, enjoy his ZTP tutorial.


Thanks to Ivan Pepelnjak (see source)

Congressman-Elect Brindisi Calls on Regulators To Put Their Foot Down on Spectrum

Brindisi

Congressman-elect Anthony Brindisi has a message for the New York State Public Service Commission: stop giving Charter Spectrum extensions.

“Today, I am asking the Public Service Commission to make Feb. 11 the absolute final deadline for Charter Spectrum to present its plan to give customers the service they deserve or it is time to show them the exit door here in New York State,” Brindisi said at a press event. “No one’s losing their cable. They’re not just going to turn the switch off and leave. They would have to bring in another provider. I would actually like to see a few providers so people can have some choices in their internet and cable providers. Competition, I believe, is a good thing.”

Brindisi is reacting to repeated extensions of the PSC’s order requiring Charter Spectrum to file an orderly exit plan with state regulators so that an alternate cable provider can be found. Private negotiations between the PSC staff and Charter officials have resulted in several deadline extensions, the latest granted until after the new year. Observers expect Charter and the PSC will settle the case, with the cable company agreeing to pay a fine and fulfilling its commitments in return for the Commission rescinding its July order canceling Charter’s merger with Time Warner Cable.

Brindisi, a Democrat from the Utica area, made Charter Spectrum a prominent campaign issue, and even ran ads against the cable company that Spectrum initially refused to air.

Brindisi says he would be okay with Spectrum remaining in the state as long as it meets its agreements, but he is not very optimistic that will happen.

“Number one — they committed that they were going to expand their service into underserved areas,” Brindisi said. “Number two — they weren’t going to raise rates and they have not complied or lived up to their agreement so I think action has to be taken.”

WUTR in Utica reports Congressman-elect Anthony Brindisi’s patience is wearing thin on Charter Spectrum. (0:48)


Thanks to Phillip Dampier (see source)

AT&T Drops Data Caps for Free if You Subscribe to DirecTV Now

AT&T customers are telling Stop the Cap! that the company is emailing broadband customers to alert them they now qualify for unlimited internet access because they also subscribe to DirecTV Now, AT&T’s streaming service targeting cord cutters.

“Good news about your internet service! Because you also added DIRECTV NOW℠ to your internet service, we’re giving you unlimited home internet data at no additional cost.”

AT&T normally charges customers an extra $30 a month to remove their 1,000 GB data cap.

The move has some net neutrality implications, because AT&T is favoring its own streaming service over the competition, which includes Sling TV, Hulu TV, PlayStation Vue, and other similar services. If a customer subscribes to Hulu TV, the 1 TB cap remains in force. If they switch to DirecTV Now, the cap is gone completely.

AT&T has undoubtedly heard from customers concerned about streaming video chewing up their data allowance. With AT&T’s DirecTV on the verge of launching a streaming equivalent of its satellite TV service, data caps are probably bad for business and could deter customers from switching.

It is yet more evidence that data caps are more about marketing and revenue than technical necessity.


Thanks to Phillip Dampier (see source)

Industrial IoT, fog-networking groups merge to gain influence

Looking to hasten the adoption of all things edge computing, fog and Industrial Internet of Things, the OpenFog Consortium (OFC) and the Industrial Internet Consortium (IIC) are combining forces.

The IIC membership, which includes Cisco, Juniper and Microsoft, looks to transform business and society by accelerating the Industrial Internet of Things, while the OFC addresses fog computing and the bandwidth, latency and communications challenges associated with IoT, 5G and AI applications.

For example, earlier this year the OFC was the driving influence behind a move to amplify the use of fog computing. The IEEE defined a standard that laid the official groundwork to ensure that devices, sensors, monitors and services are interoperable and will work together to process the seemingly boundless data streams that will come from IoT, 5G and AI systems.

The standard, known as IEEE 1934, was largely developed over the past two years by the OFC, which includes Arm, Cisco, Dell, Intel, Microsoft and Princeton University.  IEEE 1934 defines fog computing as “a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the cloud-to-things continuum."

“By expanding our pool of resources and expert collaborators, we will continue to accelerate the adoption of not only fog, but a wealth of technologies that provide the underpinnings to IoT, AI and 5G,” wrote Matt Vasey, chairman and president of the OpenFog Consortium, in a blog post about the merger.

“Machines, things, and devices are becoming increasingly intelligent, seamlessly connected, and capable of massive storage with the ability to be autonomous and self-aware. Robots, drones and self-driving cars are early indicators of small and mobile clouds. Distributed intelligence that interacts directly with the world and is immersive with all aspects of their surrounding is the concept behind fog.”

Merging the two groups is a natural fit and helps consolidate an overly fragmented collection of groups striving to create standards in the large IoT market, said Christian Renaud, research vice president for the Internet of Things at 451 Research, in a blog post about the unification.

“Both consortia share a pragmatic approach of developing specifications and testbeds and aligning the interests of a diverse set of stakeholders from throughout the broader community. This has evolved the messages and roadmaps of vendors, shaped academic research agendas and removed risk for end-user adopters of the technologies,” Renaud said. 

Since their founding, the IIC (2014) and the OFC (2015) have both helped drive the development of IoT, "transforming the narrative from confusion and (perpetually) potential future opportunities to real progress and real value,” Renaud wrote.

Vasey wrote that plans for 2019 include increasing investment in international testbeds and technical working groups. The new group's technical committees will continue to build out testbeds and define optimal network implementations for core vertical markets as well as horizontal features. There will be a renewed push toward testbed validation and certification programs.

The consortium will continue to identify and expand prototypical use cases describing best-practice architectures in real-world settings.


Thanks to Michael Cooney (see source)

Televisa Acquires Part of Axtel’s Residential FTTH Business

MEXICO CITY — Grupo Televisa, a large media company in the Spanish-speaking world, announced that it has acquired Axtel’s residential fiber-to-the-home business and related assets in Mexico City, Zapopan, Monterrey, Aguascalientes, San Luis Potosi and Ciudad Juarez.

The assets acquired comprise 553,226 revenue generating units (RGUs). An RGU is a subscription to a specific service, so a customer taking internet, phone and video counts as three RGUs. The total consists of 97,622 video, 227,802 broadband and 227,802 voice revenue generating units.

The total value of the transaction amounts to MXN 4,713 million (Mexican pesos), or approximately USD $234.3 million. As of September 30, 2018, the revenue and operating segment income associated with these assets over the prior twelve months reached MXN 1,900 million ($94.5 million) and MXN 631 million ($31.3 million), respectively.


Thanks to BBC Wires (see source)

Transition Networks Introduces PCIe Dual Speed Ethernet Fiber Network Interface Card

MINNEAPOLIS, MN — Transition Networks, a provider of edge connectivity solutions, announced it has expanded its offering for government and enterprise fiber-to-the-desktop (FTTD) applications through the addition of a new PCIe Dual Speed 100/1000 Mbps Ethernet Fiber Network Interface Card (NIC) with an open small form-factor pluggable (SFP) design, offering flexible connectivity and simplified ordering and logistics.

Accommodates Either Multimode or Single Mode Fiber Cabling
The Transition Networks dual speed N-GXE-SFP-02 fully complies with IEEE 802.3u and 802.3z standards. The NIC auto-negotiates between 100 and 1000 Mbps Ethernet network speeds based on the inserted SFP, making upgrading networks from Fast Ethernet to Gigabit Ethernet speeds simple since the same NIC can be used in either environment. The card supports industry-standard IEEE 802.3 SFPs to accommodate either multimode or single mode fiber optic cabling. This flexibility means government agencies and corporations can stock one NIC for much of their 100/1000 Mbps fiber optic network needs.

Supports Wake-on-LAN
The new NIC also supports Wake-on-LAN (WoL), allowing a PC to be woken up by a network message. WoL allows network administrators to perform off-hours maintenance at a remote site without having to dispatch a technician. It also provides full-duplex bandwidth capacity to support high-end servers. With advanced functions like VLAN filtering, link aggregation, smart load balancing, failover, support for PXE and UEFI Boot and more, the network adapter provides enhanced performance, flexible configuration and secure networking.
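
For readers curious what that network message looks like, a Wake-on-LAN magic packet is simply six 0xFF bytes followed by the target MAC address repeated 16 times, typically sent as a UDP broadcast. The short Python sketch below is generic rather than specific to this NIC, and the MAC address is a placeholder.

import socket

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times, as a UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

send_magic_packet("00:11:22:33:44:55")  # placeholder MAC for illustration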

The N-GXE-xx-02 Series NICs are compatible with Windows 10 in addition to Windows 7, 8 and 8.1, Windows Server 2008, 2008 R2, and 2012, as well as Linux and FreeBSD.

NICs Optimized for FTTD Applications
The N-GXE-SFP-02 NIC is the most recent addition to Transition Networks’ extensive family of NICs that are optimized for FTTD applications. FTTD is popular with government agencies and enterprises that are concerned that copper network connections leave them vulnerable to data theft via tapping the network connection or signal radiation. In addition to the PCIe-based NICs, Transition Networks also offers FTTD NICs for laptops and other small form factor PCs supporting USB and M.2 bus technology, as well as PoE NICs for powering VoIP phones or other devices at the desktop. The company also offers a wide assortment of SFPs compliant with the Multi-Sourcing Agreement (MSA) ensuring interoperability with all other MSA compliant networking devices, as well as Cisco, HP and Juniper compatible SFPs supporting a variety of data speeds and distance requirements.

Availability
The new Transition Networks N-GXE-SFP-02 version is now available to order and will be in stock soon. The other fixed optic fiber NICs from the N-GXE-xx-02 Series are in stock now. For more information visit www.transition.com.


Thanks to BBC Wires (see source)