Network Computing Weekend Report: Issue Highlights

TOP STORY: VMware Simplifies, Automates Virtual/Cloud Management

VMware is expanding its management portfolio with the vCenter Operations Management Suite, which integrates VMware vCenter Capacity IQ and VMware vCenter Configuration Manager for improved performance, capacity and configuration management. VMware revamped its virtualization management suite last October, adding the VMware vFabric Application Management and VMware IT Business Management tools. The new enhancements focus on embedding and integrating management tools into the platform, streamlining processes and applying analytics so customers can achieve better economics from their cloud computing deployments.
With VMware virtualization becoming increasingly pervasive within enterprise data centers, vCenter Operations Management Suite provides customers with detailed views into their infrastructures while also providing standardized methodologies for more effectively managing virtualized systems, said Charles King, principal analyst at Pund-IT. He noted there are two main elements to the product's value proposition.
"First, having a single, unified management solution offers enterprises the chance of gaining significant cost and labor efficiencies in data center operations," King said. "In addition, as companies shift IT operations more and more toward cloud computing methodologies, having a single, virtualization/cloud centric management platform for all their x86 systems will become even more attractive."
VMware is also driving intelligence into the virtualization environment and driving further efficiency with vCenter Operations Management Suite, said Mark Bowker, senior analyst at Enterprise Strategy Group. Although VMware's value proposition to customers has primarily been focused on reducing capex, vCenter Operations Management Suite is focused on improving operational efficiency and streamlining opex, he added.
"Enterprises that have mature server virtualization deployments that include workloads beyond basic IT services require more management functionality to reliably deliver applications and efficiently utilize the underlying IT infrastructure," Bowker said. "At small scale, low consolidation and less critical workloads, the standard vCenter management console offers most of the features and functionality an administrator requires. As IT rapidly scales virtualized environments, drastically increases consolidation ratios and focuses its efforts on the next tier of applications, they need improved visibility, reporting and analytics that ultimately are geared towards driving automation into IT processes. vCenter Ops is focused on exactly this."
According to VMware, the integration between the different products will aid customers in identifying emerging problems, "right sizing" infrastructure resources, and identifying and remediating performance issues caused by configuration changes. VMware is not alone in adding performance management features to its management platform: Microsoft's new System Center 2012 includes a rich set of tools for monitoring application performance and troubleshooting performance issues.
Additionally, VMware's management suite is equipped with a dashboard that's new to the management products portfolio. The dashboard provides greater depth into the health, risk and efficiency of cloud infrastructure. Smart alerts notify IT administrators of emerging health, performance and capacity issues so they can be more proactive in remediating problems. Also included is automated root cause analysis for identifying offending metrics across all layers of the infrastructure.
For applications management, VMware has included application awareness features that automatically discover and map the relationships and dependencies between applications and infrastructure components. These features are intended to help customers optimize infrastructure operations based on the individual application's requirements.
The updated VMware vCenter Operations Management Suite is available in four editions to address SMB to large enterprise requirements, with list prices starting at $50 per VM. Updates are also available as a free upgrade for current VMware vCenter Operations customers.
Check out your peers' views on IT automation by downloading Research: IT Automation, available when you subscribe to Network Computing Reports (free, registration required).

MORE NEWS: Enterasys Addresses Wired-Wireless Pain


Network equipment vendor Enterasys is tackling the growing problem of managing wired and wireless devices with the latest addition to its suite of fabric network management technology, the OneFabric Edge Architecture. The combined wired-wireless management fabric relieves a number of network management headaches, the company says, especially in situations where the wired network is managed by one vendor's equipment and the wireless network by another's.
"Wired is a pain in the butt now," said Craig Mathias, a principal analyst at Farpoint Group. With wireless devices ubiquitous in the workplace, he wonders why anyone would use a wired network.
For now, though, wired and wireless networks have to work together and need to be merged. "The idea of thinking of the network as a single unified entity ... is one of the key emerging themes that I think you're going to see a lot of emphasis on over the next couple of years," Mathias said.
The OneFabric Edge provides end-to-end integration of the wireless local area network (WLAN) and the wired infrastructure, combining Enterasys' security and management features with application-aware capabilities that aid compliance and service level agreements (SLAs). The product introduces what Enterasys calls the Wireless Services Engine (WiSE), a WLAN controller for application services, which the company said gives customers greater flexibility for deploying edge access in virtual, physical and cloud environments.
Lastly, the OneFabric Edge introduces the K-Series modular switch, which provides visibility into network traffic for location, identification and overall management of the converged wired and wireless network. Enterasys says the K-Series switch helps manage environments in which employees bring their own wireless devices into work to run on the corporate network.
Both the Enterasys data center fabric and edge fabric systems are jointly managed by the OneFabric Control Center management console.
While applauding Enterasys' innovation, Mathias said it faces considerable competition in the data center fabric space from companies such as Cisco Systems, Juniper Networks, Brocade and others -- as well as in the edge network space.

ARCHITECTURE: Time To Reconsider The Data Center

As enterprises push out data centers into cloud-based computing and virtual applications, traditional data center planning and practices haven't necessarily kept pace. Is it time to reshape definitions of classic brick-and-mortar data centers into a new computing concept with different performance and total cost of ownership expectations? If nothing else, the business cases now driving data center services are beginning to demand it. "As a global organization, we know that we must not only provide 24/7 IT, but also enterprise-strength IT support on a follow-the-sun basis," says John Heller, CIO of Caterpillar.
Demands for both 24/7 computing availability and IT "A team" availability come at a time when 65% of CIOs are using or planning to use cloud in their data center strategies. However, 55% are still uncommitted to IT asset management beyond physical data centers, as revealed in an IBM survey of IT executives recently shared in a briefing with industry analysts. The primary reason for trepidation is concern about security.
Public cloud providers haven't done much to change this perception. This past May, many of Google's services, such as Gmail, Search, Maps, Analytics and YouTube, suffered an outage, leading IT executives to wonder what would have happened if another cloud service, like Google Apps, had experienced a similar outage. This kind of downtime is not acceptable for any enterprise application--even a non-mission-critical one. In August of 2011, lightning knocked out power sources in Europe and caused downtime for many Amazon customers using cloud services such as the Amazon Elastic Compute Cloud (EC2). This was compounded a day later when a problem in a clean-up process within Amazon's Elastic Block Store (EBS) service deleted some customer data.
Situations like these do not inspire confidence in CIOs when it comes to entrusting enterprise services to the cloud. They are one of several reasons why enterprises deploying cloud are beginning the journey with private clouds, with a game plan that allows for expansion into a hybrid (private-public) cloud as cloud technologies and practices mature.
Regardless of the evolutionary path cloud must take, it has already impacted traditional data center thinking to the point where most CIOs and data center managers understand that the data center must be reshaped as IT moves forward.
Here are several key data center challenges facing CIOs as we move into 2012:
Conversion to a service culture
Cloud computing and on-demand resource provisioning are propelling IT into a service center with measurable SLAs (service level agreements) that evaluate IT performance based on responsiveness to user help calls as well as on mean time to repair (MTTR) during problem resolution. Vendors are working overtime to ensure that the tools for measuring performance and taking corrective action in a service-oriented environment are there. However, the more troubling aspect of this for IT decision makers is how to move their staffs forward.
While IT has pretty much shed its 1980s-vintage reputation of being a glass house, it is still a control-oriented discipline that operates in a world of things, not people. Developing people skills so you can work with your end users as if they were outside customers (that is, you do not take them for granted), and learning to work with other IT disciplines that have formerly been siloed, are not accomplished overnight.
In the new service-oriented data center, it will be incumbent on IT to deliver premium service to expectant end users. This service comes not only in the form of better uptime and faster processing and response/repair times. It also demands excellent communications skills and the ability to follow up with end users on outstanding issues before they finally have to pick up the phone and call you because they haven't heard from you.
End-to-end application management
The emphasis on service management means that IT must have end-to-end visibility of applications and workload performance if performance problems are going to be detected and resolved. Applications and workloads now routinely cross multiple platforms and operating environments in both traditional and cloud-based environments, which requires system management software that is able to track an application at every juncture of performance. This presents a challenge if different IT professionals (for example, DBAs, network administrators and system programmers) use different tool sets for troubleshooting. These different sets of tools tend to present application data differently, so there is no unified view of the data. The consequence can be staff finger-pointing and deadlock on why a given application isn't performing well. Meanwhile, the business waits for the problem to be solved.

REVIEWS & WORKSHOPS: Freeware Increases RJ Lee's Management Efficiency

Faced with rapid growth and increases in the amount and complexity of data and its IT operations, RJ Lee Group went looking for a way to simplify its computing infrastructure. The company ended up selecting Spiceworks as an alternative to adding staff or spending a lot of money on network and system management software.
"By moving to Spiceworks, we were able to manage our infrastructure more effectively without increasing our expenses," says Justin Davison, senior systems engineer at RJ Lee Group. In business since 1980, the company is an industrial forensics laboratory offering specialized materials characterization, forensic engineering and information management services. For example, it helped the United States Environmental Protection Agency (EPA) develop a method to analyze asbestos.
The company's research-based services chew up a lot of IT resources. It has 30 Tbytes of data stored on a storage area network (SAN), and servers primarily running Microsoft's Windows operating system. The 300-person operation works mainly in Monroeville, Pa., but has spread its wings to more than half a dozen satellite locations, including New York and Quebec City.
The small IT staff oversees the dispersed computing infrastructure. Traditionally, this group relied on the inherent management functions of each component (server, router and so on) to ensure that its applications were up and its network connections were functioning well.
By 2008, that approach was proving to be inadequate. "Our applications and IT infrastructure were growing and becoming more dispersed," says Davison. Consequently, tasks such as determining what might be causing a slowdown on a network link were taking more time to complete. "We needed a tool that would automate some of our routine administrative tasks," he says.
There was no shortage of options available, but the company wanted to keep its expenses as low as possible. Davison started searching on the Web for free management tools, and Spiceworks emerged as an intriguing option because of its all-encompassing nature. Although it began life in 2006 as a basic network inventory and scan tool, the offering has grown into a full-fledged help desk and IT support community with more than 1.5 million users. To stand out from the competition, it uses an advertising-based model: Customers do not pay for the product but are exposed to Google-like advertisements.
"Spiceworks is like a Swiss Army knife for system and network management," notes Davison. The product includes a series of modules that can be used autonomously or in conjunction with one another.
After making the decision to go with Spiceworks in the spring of 2008, RJ Lee had the product up and running in a few weeks. "Spiceworks includes an intuitive user interface, so the initial configuration was straightforward," he says.

NETWORK COMPUTING PRO REPORTS & TECHWEB WHITEPAPERS: Fundamentals: How to Write an Effective SAN RFI


What's Driving Storage Requirements?
Pundits are hailing 2012 as the year of big data--as if that's a good thing. In our InformationWeek 2012 Big Data Survey of business technology professionals at organizations with a minimum of 10 TB of data, just 15% rate their shops as very effective at managing these large data sets. This is placing storage teams in the spotlight during IT strategy and budgeting sessions. Are you ready when the CIO asks about your plan?
The IT equivalent of duct tape and spit isn't a strategy. Not only must our systems cope with unrelenting demands for added capacity, they must adapt to the changing application hosting environment: Dedicated servers are out, virtualization is in. Our databases need lightning-fast access. Convergence is the key word for network architects. As these and other dynamics put increasing stress on legacy storage systems, many IT organizations are looking beyond just adding hardware to rethinking their entire storage and data architectures.
Upgrades of this magnitude need to follow a well-thought-out, comprehensive strategy that sets parameters for product and vendor evaluations. It’s a process that benefits from rigor and formality--when it comes time to cut a six- or seven-figure check, you don't want to be tossing the dice and hoping for the best. To tip the odds in your favor, make vendors address your needs on your terms through a formal request process.
Think of it as the IT version of Survivor, and the challenge begins with a hard-hitting request for information. (S4110212)

TECHCENTER: PRIVATE CLOUD IBM And NEC Leverage OpenFlow For High-Performance Networking

IBM and NEC are collaborating on high-performance OpenFlow deployments. OpenFlow, developed at Stanford University, has enjoyed acceptance in university networks because an OpenFlow network can run alongside the campus production network without impacting it. In 2011, OpenFlow broke out of its education niche into the mainstream with announcements from Big Switch, Fulcrum and NEC. IBM's and NEC's announcement is a proof point that OpenFlow has a role in enterprise IT and can be used in high-performance applications.
There are a number of myths surrounding OpenFlow, including that there is a delay on the first packet of a flow while the controller performs a lookup, and that the controller is a single point of failure. Both are easily addressed through sound design and management practices. In fact, the upsides of using OpenFlow--such as simplified traffic management, policy-based networking that creates paths through the network based on higher-level decisions than the destination address, and software-defined networking with tight integration between applications and network configuration--can far outweigh any downsides. The IBM and NEC announcement describes how enterprises are overcoming these obstacles and running OpenFlow on their production networks.
One customer of the combined IBM and NEC products is Selerity, which provides financial information from primary sources to its subscribers. Its service-level commitments are on the order of microseconds, required so that all subscribers receive the same information at the same time. In addition, Selerity has to manage subscription entitlements to ensure customers are getting exactly what they paid for. Selerity's entitlement application needs to make those decisions and dispatch the data in near real time. The challenge Selerity faces in meeting all of those competing goals is maintaining low latency while keeping traffic separated.
Previously, Selerity satisfied those requirements either with a convoluted set of VLANs and high-end firewalls to forward traffic to the proper locations, or with an application-level process that made the forwarding decisions. In either case, the solution was complex, inflexible and expensive. Adding a new subscription for a customer meant making a number of changes to networking equipment, which took time and was error-prone.
Using OpenFlow on NEC's Programmable Flow Controller, Selerity was able to move the forwarding decision off the servers and the firewall/switch layer and into an OpenFlow-controlled network. Using flow rules defined once on the Programmable Flow Controller, the UDP packets coming from Selerity's servers are rewritten, added to a multicast group and forwarded to the destination ports corresponding to individual customers within a few microseconds. Selerity ensures that the correct data goes only to intended customers and that all of the customers receive the data at the same time. Selerity was also able to easily add more redundancy to its delivery network, since an OpenFlow network isn't hobbled by Ethernet constraints such as the requirement for a loop-free topology.
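To make the mechanism concrete, the sketch below shows the general shape of such a rule using the open-source Ryu OpenFlow controller rather than NEC's Programmable Flow Controller; the switch ports, UDP port number and multicast address are hypothetical, not Selerity's actual configuration.

```python
# Minimal sketch (Ryu, OpenFlow 1.3): match a UDP feed from a publishing
# server, rewrite its destination to a multicast address and replicate it
# to several subscriber-facing ports. All values below are hypothetical.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FeedReplicator(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_feed_rule(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match UDP market data arriving from the feed server on port 1.
        match = parser.OFPMatch(in_port=1, eth_type=0x0800,
                                ip_proto=17, udp_dst=5001)

        # Rewrite the destination address, then copy the packet out of each
        # subscriber port; the replication happens in the switch hardware.
        actions = [parser.OFPActionSetField(ipv4_dst='239.1.1.1')]
        actions += [parser.OFPActionOutput(p) for p in (2, 3, 4)]

        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Because the rule lives in the switch's flow table, the rewrite and fan-out add only hardware forwarding latency; the controller is involved only when rules are installed or changed.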
Selerity's application and SLA requirements are unique to the financial industry, but many enterprises have similar demands that could be addressed using an OpenFlow-managed network.
IBM and NEC also described unnamed customers using OpenFlow to solve common issues such as forwarding network traffic to multiple analysis devices and forwarding traffic to load balancers. Companies like Anue Systems, Gigamon and NetOptics offer in-line network taps that can combine many network connections into a single output or split a single input into many outputs, either replicating all frames across all output ports or slicing the output stream based on data in the frame such as addresses and port numbers. These taps work well, but they are expensive and must sit in-line with the monitored link. One security-focused customer instead connected taps and switch SPAN ports to an IBM G8264 OpenFlow switch, ran the traffic through a deep packet inspection engine and then forwarded the flows to one or more analysis tools. The result is far more flexible monitoring than a fixed tap can provide.
More vendors are hopping on the OpenFlow bandwagon, including networking giants Cisco and HP. Juniper Networks added OpenFlow to its Junos SDK in 2011, while OpenFlow controller vendor Big Switch introduced an open source OpenFlow controller early this year. We will continue to see interesting use cases of OpenFlow in production environments.

TECHCENTER: PUBLIC CLOUD Thales and Infoblox Address Weak DNSSEC Demand

Information systems and communications security vendor Thales has integrated its nShield hardware security module (HSM) with the Infoblox DNS platform to provide customers with simple deployment of Domain Name System Security Extensions (DNSSEC), a security protocol designed to protect the Internet from attacks like cache poisoning.
Adoption of DNSSEC within the enterprise has been slow, and according to Cricket Liu, VP of architecture at Infoblox, enterprises have run out of excuses to adopt the technology. The threats DNSSEC protects enterprises from are very real and getting worse. Liu says now is the time for enterprises to start deploying DNSSEC, which is where Infoblox and the Thales nShield integration can help.
"The threat of cache poisoning is very real. We've seen cache poisoning attacks out on the Internet. The consequences are very serious," Liu says. Cache poisoning (also known as DNS poisoning) is an attack that corrupts the DNS records a resolver has cached for a domain, replacing them with forged entries that point potential victims to a site that looks very much like the one they're trying to reach but that has malicious ends in mind.
DNSSEC has been gathering momentum fast, but from such a small base that adoption is still almost non-existent. According to the sixth annual survey of the DNS infrastructure, adoption soared 340% last year. However, only 0.02% of zones have been DNSSEC-signed, and almost a quarter of those, 23%, failed validation due to expired signatures.
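DNSSEC counters cache poisoning by letting a resolver cryptographically verify that the records it receives were signed by the zone's owner; an expired or missing signature makes validation fail, which is what the 23% figure above reflects. As a rough illustration of that check (not part of the Infoblox or Thales products), here is a minimal sketch using the open-source dnspython library; the zone name and resolver address are placeholders, and a production validator would also use TCP fallback and walk the full chain of trust from the root.

```python
# Minimal sketch: validate a DNSSEC-signed A record with dnspython.
# Zone and nameserver are placeholders; assumes the answer section holds
# [rrset, rrsig] in that order, as in the canonical dnspython example.
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

ZONE = dns.name.from_text('example.com.')   # hypothetical signed zone
NAMESERVER = '8.8.8.8'                      # any DNSSEC-aware resolver

def fetch(qname, rdtype):
    query = dns.message.make_query(qname, rdtype, want_dnssec=True)
    return dns.query.udp(query, NAMESERVER, timeout=5)

# 1. Get the zone's public signing keys (DNSKEY) plus their RRSIG.
key_resp = fetch(ZONE, dns.rdatatype.DNSKEY)
dnskey_rrset, dnskey_rrsig = key_resp.answer[0], key_resp.answer[1]

# 2. Get the A record we care about, plus its RRSIG.
a_resp = fetch(dns.name.from_text('www.example.com.'), dns.rdatatype.A)
a_rrset, a_rrsig = a_resp.answer[0], a_resp.answer[1]

# 3. Verify the signature; dns.dnssec.validate raises ValidationFailure
#    if the record was forged or the signature has expired.
dns.dnssec.validate(a_rrset, a_rrsig, {ZONE: dnskey_rrset})
print('signature valid')
```

If an attacker injects a forged A record, the signature check in the last step fails and the answer is discarded rather than cached.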
For a long time, businesses of all sizes have been waiting for top-level zones and root zones to deploy DNSSEC. Since the technology works only with a top-down deployment approach (starting with top-level domains such as .com, .net and .org), there was no sense in an enterprise deploying it except for internal use, says Richard Moulds, VP of product management and strategy at Thales e-Security.
"Virtually all of the top-level domains have stepped up to use DNSSEC," Moulds says.
DNSSEC has moved down the stack and is now starting to see early adoption by ISPs. Comcast announced the completion of its DNSSEC deployment in early January. As the largest ISP in the United States, its adoption of DNSSEC sets a precedent that others are sure to follow, Liu says. He compares Comcast's adoption of DNSSEC to GoDaddy's full deployment of IPv6 in 2010, which caused the adoption rate of IPv6 to explode from 1.5% to 25% of the market in a single year.
Uptake in the enterprise has been incremental so far, and some businesses (particularly those with websites that process financial transactions and those that fall under various regulatory and compliance requirements) are starting to take notice of DNSSEC. Depending on the type of business and the function of the individual enterprise's website, interest in DNSSEC can be high or low.


TECHCENTER: NEXT GEN NETWORK F5 Networks 'Fixes' Data Center Security

Arguing that multiple point appliances intended to secure a network only add to complexity without providing the intended protection, F5 Networks is introducing what it calls a Data Center Firewall to combine multiple security solutions into one appliance. The appliance, called BIG-IP model 11050 and carrying a starting price of $129,995, delivers such security features as dynamic threat defense, DDoS protection, protocol security, SSL termination and a network firewall.
"The current environment just doesn't scale, it doesn't extend and it doesn't respond. We think this model is broken and it's very, very real in our customer base today," said Mark Vondemkamp, director of product management for F5.
ICSA Labs, an industry accreditation body for network firewall solutions, certified the F5 BIG-IP product family as a secure sockets layer (SSL), transport layer security (TLS) and virtual private network (VPN) compliant appliance line.
The appliance is designed to respond to some of the latest types of attacks on networks, Vondemkamp said, such as distributed denial of service (DDoS) attacks, in which websites are flooded with millions of requests to bring them down. Lately this has been done for political reasons, such as the attacks on sites targeted in the wake of the WikiLeaks document dumps of U.S. State Department cables in 2011.
F5 has also seen a rise in the number of blended threats on the Internet, combining a DDoS attack with an application-level attack. Lastly, the BIG-IP appliance protects against zero-day attacks, in which a vulnerability in a software product, such as one from Microsoft or Adobe, is exploited before a patch can be developed and deployed.
The array of point solutions to address these threats -- network firewalls, DDoS appliances, domain name server (DNS) appliances, web application firewalls and load balancers -- is difficult to manage, can be a drag on network performance, and can result in multiple points of failure, said Vondemkamp.
"The traditional approach needs to be replaced by a unified security architecture," he said.
F5, placed in the leaders quadrant of Gartner's "Magic Quadrant" analysis of SSL VPN vendors released in December 2011, shares the top spot with Cisco Systems and Juniper Networks, while competitor Citrix Systems is identified as a viable "challenger."
However, in its analysis of vendors, Gartner faults F5 for lacking an Internet Protocol Security (IPsec) capability in its products. IPsec is a protocol for securing IP communications by authenticating and encrypting each IP packet in a communications session.

BLOG: Alas Poor Virtensys, I Knew Virtual I/O Horatio

I must admit I was one of those folks who were intrigued by the idea of I/O virtualization. I led sessions at conferences exploring the various ways one could connect servers and peripherals to each other. The very idea that I could share expensive resources like RAID controllers and network connections from a shared pool seemed like a path to the flexibility I always wanted. Apparently most of you disagreed, as at least one I/O virtualization pioneer, Virtensys, bit the dust this week. I don't think virtual I/O will ever go mainstream, so I am sticking with 10Gbps Ethernet and iSCSI.
Of course the whole thing brought back the early days of the LAN industry, when we installed ARCnet and Cheapernet LANs so users could share expensive peripherals like hard disks and laser printers. The I/O virtualization vendors, from Aprius to Xsigo, all promised to give us access to peripherals -- from Ethernet and Fibre Channel ports to RAID controllers, storage and GPUs -- while sharing the cost across multiple servers.
These vendors were trying to bring the promise of the PCI SIG's I/O virtualization standards to market. The PCI SIG developed standards for how multiple processes, or even multiple servers, could share resources on I/O cards. SR-IOV, the standard for sharing resources between multiple processes on a single server, has gotten a lukewarm reception in the industry, with important players like VMware still not fully supporting it. MR-IOV, which allows multiple servers to share I/O cards, never took off, as the I/O card vendors realized supporting MR-IOV could mean selling fewer cards.
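For a sense of what SR-IOV looks like in practice, here is a minimal sketch of carving a NIC into virtual functions through the standard Linux sysfs interface; the interface name and VF count are hypothetical, and it assumes an SR-IOV-capable card, driver support and root privileges.

```python
# Minimal sketch: split one SR-IOV-capable NIC into virtual functions via
# the standard Linux sysfs interface. Interface name and VF count are
# hypothetical; requires root, an SR-IOV-capable card and driver support.
from pathlib import Path

IFACE = "eth0"                                   # hypothetical NIC name
dev = Path(f"/sys/class/net/{IFACE}/device")

total_vfs = int((dev / "sriov_totalvfs").read_text())   # what the card supports
wanted = min(4, total_vfs)

(dev / "sriov_numvfs").write_text("0")           # reset any existing VFs first
(dev / "sriov_numvfs").write_text(str(wanted))   # create the virtual functions

print(f"{IFACE}: enabled {wanted} of {total_vfs} possible virtual functions")
```

Each virtual function then appears as its own PCIe device that can be handed directly to a virtual machine, which is the same share-one-card-among-many-consumers idea, just confined to a single server.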
Virtensys, Aprius and NextIO all worked on building a solution that would let users put any PCIe I/O card they like into their I/O concentrators. Virtensys and NextIO used low-cost ($200) PCI extension cards to connect servers to their concentrators, while Aprius used 10Gbps Ethernet for the server-concentrator connection, which was neat but raised the cost of each connection.
The last I/O virtualization vendor, Xsigo, kept its focus on what most customers actually needed: scalable, manageable 10Gbps Ethernet and Fibre Channel connectivity at the right price. While it may be cool to share a RAID controller and allocate its logical drives to a group of servers, SAN technology already does that, and it allows multiple servers to access the same volume at the same time to support clustering and VMotion.
By using 40Gbps InfiniBand and/or 10Gbps Ethernet for the connections to its I/O Director, Xsigo can put InfiniBand or Ethernet switches between the I/O Director and the servers. One I/O Director can support 250 servers, and a cluster of four I/O Directors can support 1,000 servers -- significantly more than the 16 servers Virtensys could support with a single system. NextIO similarly concentrated on just making IOV work at rack scale.
Virtensys was founded in 2006 as a spinoff from Xyratex and burned through around $40 million in venture funds over its short life. In October Virtensys and Micron announced plans to share Micron SSDs over the Virtensys systems. Last week Micron picked up the assets, primarily intellectual property, and staff of Virtensys. While details of the deal are being kept secret, word on the street is that the purchase price was more on the order of a sack of magic beans than the $160 million the VCs would have considered a win.
Rumors also indicate that Aprius has been absorbed by Fusion-IO for a song. I tried contacting the folks I've worked with at Virtensys and Aprius but have gotten no response.
While losing two of the four players isn't good for those that remain, there is a market for their gear at telcos, hosting providers and other organizations that run large, highly virtualized environments with a high rate of change. Hopefully Micron will come up with a PCIe SSD sharing system. Till then it's 10Gbps Ethernet and iSCSI for me.
Disclosure: I've followed all the companies mentioned here for a few years. I'm sure drinks, meals and promotional tchotchkes were involved, but that is the extent of business I have done with them.

SLIDESHOW: Microsoft System Center 2012 Revealed

January 19, 2012: Microsoft's System Center 2012, which we discussed in Microsoft's System Center 2012: Building A Private Cloud, is the latest attempt by a big vendor to bring private cloud to the masses. While there are many improvements to System Center, building a private cloud using anyone's software is far from easy. At Microsoft's private cloud reviewers' workshop, we got a peek at the sausage factory. There are a lot of components to configure, but Microsoft has done a good job of streamlining many of the processes.
System Center 2012 can do bare-metal provisioning using IPMI. It relies heavily on templates: you define skeleton options--such as MAC address, networking and storage--which are either resolved at runtime (an IP address via DHCP, for example) or taken from a template (such as a host name). What is interesting is that System Center can discover server hardware and make it available. Inside Virtual Machine Manager, we defined our new hardware host and applied it to a server. You can readily track the progress of the deployment.
