Jun 10, 2014
Telenor to debut cheap SIM cards in September
Telenor will introduce its telecom service in September by selling SIM cards for Ks 1,500 (about US$1.60) apiece, the company said in a press release.
It will not be necessary to register in advance to buy a Telenor SIM card. Customers can buy as many SIM cards as they like as long as they can show a National Registration Card, Telenor staff said.
Internet speed on the 3G network will depend on how many users are surfing in a particular area, but the lowest speeds should range from 700 Kbps to 1 Mbps at peak times and the highest from 4 to 6 Mbps, Telenor staff said.
Speeds on the 2G network could drop as low as 100-150 Kbps during peak hours in some locations. Telenor said it plans to offer a 4G service if there are enough handsets on the market to support one.
Telenor plans to construct 8,000 telecom towers across the country, but faces a major hurdle in installing the first 1,000 stations by September. After the first thousand are installed, it plans to add 300 to 400 per month.
The speed of mobile internet connections occasionally exceeds 1 Mbps on the 3G network, but is usually about 100 Kbps. Internet connections via a smartphone are either slow or nonexistent, users say.
Telenor said it plans to open 200 SIM card sales centres and will also sell the cards through 100,000 SIM retail shops nationwide.
Published on Friday, 06 June 2014 16:40
Specializing in telecom, data center, and inter-networking supply
Original Article here
Jun 6, 2014
Artificial Intelligence: A New Frontier in Data Center Innovation
Google made headlines when it revealed that it is using machine learning to optimize its data center performance. But the search giant isn’t the first company to harness artificial intelligence to fine-tune its server infrastructure. In fact, Google’s effort is only the latest in a series of initiatives to create an electronic “data center brain” that can analyze IT infrastructure.
Automation has always been a priority for data center managers, and has become more important as facilities have become more complex. The DevOps movement seeks to “automate all the things” in a data center, while the push for greater efficiency has driven the development of smarter cooling systems.
Where is this all headed? Don’t worry. The data center won’t be a portal to Skynet anytime soon. Data center managers love technology, but they don’t totally trust it.
“You still need humans to make good judgments about these things,” said Joe Kava, vice president for data centers at Google. “I still want our engineers to review the recommendations.”
Kava said last week that Google has begun using a neural network to analyze the oceans of data it collects about its server farms and to recommend ways to improve them. Kava said the use of machine learning will allow Google to reach new frontiers in efficiency in its data centers, moving beyond what its engineers can see and analyze.
While there have been modest efforts to create unmanned “lights out” data centers, these are typically facilities being managed through remote monitoring, with humans rather than machines making the decisions. Meanwhile, Google and other companies developing machine learning tools for the data center say the endgame is using artificial intelligence to help design better data centers, not to replace the humans running them.
Romonet: predictive TCO modeling
One company that has welcomed the attention around Google’s announcement is Romonet, the UK-based maker of data center management tools. In 2010 the company introduced Prognose, a software program that uses machine learning to build predictive models for data center operations.
Romonet focuses on modeling the total cost of ownership (TCO) of operating the entire data center, rather than a single metric such as PUE (Power Usage Effectiveness), which is where Google is targeting its efforts. The company says its predictive model is calibrated to 97 percent accuracy across a year of operations.
Google’s approach is “a clever way (albeit a source-data-intensive one) of basically doing what we are doing,” Romonet CEO and co-founder Zahl Limbuwala wrote in a blog post. “Joe’s presentation could have been one of ours. They’ve put their method into the public domain but not their actual software – so if you want what they’ve got you need to build it yourself. Thus they just shone a light on us that we couldn’t have done ourselves.”
Romonet’s modeling software allows businesses to accurately predict and manage financial risk within their data center or cloud computing environment. Its tools can work from design and engineering documents for a data center to build a simulation of how the facility will operate. Working from engineering documents allows Romonet to provide a detailed operational analysis without the need for thermal sensors, airflow monitoring or any agents – which also allows it to analyze a working facility without impacting its operations.
These types of models can be used to run design simulations, allowing companies to conduct virtual test-drives of new designs and understand how they will impact the facility.
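To ground the TCO framing, here is a minimal sketch of the simplest ingredient of such a model: annual electricity cost derived from IT load and PUE. The function name, rates and figures below are illustrative assumptions only; Romonet's Prognose models far more than energy spend.

```python
# A minimal, hypothetical TCO-style calculation: annual electricity cost
# from IT load and PUE. Illustrative assumptions, not Romonet's software.

def annual_energy_cost(it_load_kw, pue, price_per_kwh=0.10):
    """Facility energy cost per year: IT load x PUE x hours x unit price."""
    facility_kw = it_load_kw * pue  # total draw, including cooling overhead
    return facility_kw * 8_760 * price_per_kwh  # 8,760 hours in a year

# For a 1 MW IT load, trimming PUE from 1.5 to 1.2 saves about $263,000
# per year, which is why small efficiency gains matter at this scale.
saving = annual_energy_cost(1_000, 1.5) - annual_energy_cost(1_000, 1.2)
print(f"annual saving: ${saving:,.0f}")
```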
“I can envision using this during the data center design cycle,” said Google’s Kava. “You can use it as a forward-looking tool to test design changes and innovations.”
BY RICH MILLER ON JUNE 6, 2014
Original Article here
The best products and services in Data center, telecom, and inter-networking supply
What are the various types of xDSL?
There are several forms of xDSL, each designed around specific goals and needs of the marketplace. Some forms of xDSL are proprietary, some are simply theoretical models and some are widely used standards. They may best be categorized by the modulation methods used to encode data. Below is a brief summary of some of the known types of xDSL technologies.
ADSL
Asymmetric Digital Subscriber Line (ADSL) is the most popular form of xDSL technology. The key to ADSL is that the upstream and downstream bandwidth is asymmetric, or uneven. In practice, the bandwidth from the provider to the user (downstream) is the higher-speed path. This is in part due to the limitations of the telephone cabling system and the desire to accommodate the typical Internet usage pattern, where the majority of data is sent to the user (programs, graphics, sounds and video) with minimal upload capacity required (keystrokes and mouse clicks). Downstream speeds typically range from 768 Kb/s to 9 Mb/s; upstream speeds typically range from 64 Kb/s to 1.5 Mb/s.
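As a back-of-the-envelope illustration of that asymmetry, the sketch below compares download and upload times for the same file on a hypothetical mid-range ADSL line; the specific rates are assumptions picked from within the ranges quoted above.

```python
def transfer_seconds(size_mb, rate_kbps):
    """Seconds to move size_mb megabytes at rate_kbps kilobits per second."""
    return size_mb * 8_000 / rate_kbps  # 1 MB = 8,000 kilobits (decimal)

# Hypothetical mid-range ADSL line: 4 Mb/s down, 384 Kb/s up.
size_mb = 10
print(f"download: {transfer_seconds(size_mb, 4_000):5.0f} s")  # ~20 s
print(f"upload:   {transfer_seconds(size_mb, 384):5.0f} s")    # ~208 s
```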
ADSL Lite (see G.lite)
CDSL
Consumer Digital Subscriber Line (CDSL) is a proprietary technology trademarked by Rockwell International.
CiDSL
Globespan's proprietary, splitterless Consumer-installable Digital Subscriber Line (CiDSL).
EtherLoop
EtherLoop is currently a proprietary technology from Nortel, short for Ethernet Local Loop. EtherLoop uses the advanced signal modulation techniques of DSL and combines them with the half-duplex "burst" packet nature of Ethernet. EtherLoop modems generate high-frequency signals only when there is something to send; the rest of the time, they use only a low-frequency (ISDN-speed) management signal. EtherLoop can measure the ambient noise between packets, which allows it to avoid interference on a packet-by-packet basis by shifting frequencies as necessary. Since EtherLoop is half-duplex, it can deliver the same bandwidth rate in either the upstream or downstream direction, but not both simultaneously. Nortel is initially planning for speeds ranging between 1.5 Mb/s and 10 Mb/s, depending on line quality and distance limitations.
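The packet-by-packet frequency shifting can be pictured with a toy simulation: sample the noise on a handful of carrier bins between bursts, then send the next burst on the quietest bin. Everything below (the bins, the noise model, the function names) is an illustrative assumption, not Nortel's actual algorithm.

```python
import random

CARRIER_BINS_MHZ = [1.0, 2.0, 3.0, 4.0]  # hypothetical frequency bins

def measure_ambient_noise(bins):
    """Stand-in for the between-packet noise measurement described above."""
    return {f: random.uniform(0.0, 1.0) for f in bins}

def pick_carrier(noise_by_bin):
    """Shift to the quietest frequency for the next burst."""
    return min(noise_by_bin, key=noise_by_bin.get)

for packet in range(3):
    noise = measure_ambient_noise(CARRIER_BINS_MHZ)
    carrier = pick_carrier(noise)
    print(f"packet {packet}: burst on {carrier} MHz "
          f"(noise {noise[carrier]:.2f})")
```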
G.lite
A lower data rate version of Asymmetric Digital Subscriber Line (ADSL) was proposed as an extension to ANSI standard T1.413 by the UAWG (Universal ADSL Working Group), led by Microsoft, Intel and Compaq. It is known as G.992.2 in the ITU standards committee. It uses the same modulation scheme as ADSL (DMT) but eliminates the POTS splitter at the customer premises. As a result, the ADSL signal is carried over all of the house wiring, which results in lower available bandwidth due to greater noise impairments. The "splitterless" label is something of a misnomer: instead of requiring a splitter at the customer premises, the splitting of the signal is done at the local CO.
G.shdsl
G.shdsl is an ITU standard which offers a rich set of features (e.g. rate adaptation) and greater reach than many current standards. G.shdsl also allows for the negotiation of a number of framing protocols, including ATM, T1, E1, ISDN and IP. G.shdsl is touted as being able to replace T1, E1, HDSL, SDSL, HDSL2, ISDN and IDSL technologies.
HDSL
High Bit-rate Digital Subscriber Line (HDSL) is generally used as a substitute for T1/E1. HDSL is becoming popular as a way to provide full-duplex symmetric data communication at rates up to 1.544 Mb/s (2.048 Mb/s in Europe) over moderate distances via conventional twisted-pair telephone wires. Traditional T1 (E1 in Europe) requires repeaters every 6,000 ft to boost the signal strength; HDSL offers a longer range, allowing transmission over distances of up to 12,000 ft without repeaters. It uses pulse amplitude modulation (PAM) on a 4-wire loop.
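Using the spacing figures quoted above, the repeater savings are easy to check; the helper below is ours, not part of any standard.

```python
import math

def t1_repeaters(loop_ft, max_segment_ft=6_000):
    """Line repeaters a traditional T1 span needs at ~6,000 ft spacing."""
    return max(math.ceil(loop_ft / max_segment_ft) - 1, 0)

# A 12,000 ft loop: T1 needs a mid-span repeater, HDSL needs none.
print(t1_repeaters(12_000))  # -> 1
```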
HDSL2
High Bit-rate Digital Subscriber Line 2 (HDSL2) was designed to transport T1 signaling at 1.544 Mb/s over a single copper pair. HDSL2 uses overlapped phase Trellis-code interlocked spectrum (OPTIS).
IDSL
ISDN-based DSL, developed originally by Ascend Communications. IDSL uses 2B1Q line coding and typically supports data transfer rates of 128 Kb/s. Many end users have had to make do with IDSL service when full-speed ADSL was not available in their area. This technology is similar to ISDN, but uses the full bandwidth of the two 64 Kb/s bearer channels plus the 16 Kb/s delta channel.
MDSL
Usually this stands for Multi-rate Digital Subscriber Line (MDSL); the meaning depends on the context in which the acronym appears. It is either a proprietary scheme for SDSL or simply a generic alternative to the more common ADSL name. In the former case, you may see the acronym MSDSL. There is also another proprietary scheme in which it stands for medium-bit-rate DSL. Confused yet?
RADSL
Rate Adaptive Digital Subscriber Line (RADSL) is any rate-adaptive xDSL modem, but may specifically refer to a proprietary modulation standard designed by Globespan Semiconductor, which uses carrierless amplitude and phase modulation (CAP). T1.413 standard DMT modems are also technically RADSL, but generally are not referred to as such. The uplink rate depends on the downlink rate, which is a function of line conditions and signal-to-noise ratio (SNR).
SDSL
Symmetric Digital Subscriber Line (SDSL) is a 2-wire implementation of HDSL that supports T1/E1 on a single pair to a distance of 11,000 ft. The name has become more generic over time, referring to symmetric service at a variety of rates over a single loop.
UDSL
Universal DSL. See G.lite.
VDSL
Very High Bit-rate Digital Subscriber Line (VDSL) is proposed for shorter local loops, perhaps up to 3,000 ft. Data rates exceed 10 Mb/s.
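For quick reference, the rate and reach figures quoted in the entries above can be collected into a small lookup table; the dictionary below is just one convenient way to hold them.

```python
# Figures as quoted in the entries above; "sym" marks symmetric services
# where the same rate applies in either direction.
XDSL_SUMMARY = {
    "ADSL":      "768 Kb/s-9 Mb/s down, 64 Kb/s-1.5 Mb/s up",
    "ETHERLOOP": "sym 1.5-10 Mb/s, half-duplex",
    "HDSL":      "sym 1.544 Mb/s (2.048 Mb/s in Europe), up to 12,000 ft",
    "HDSL2":     "sym 1.544 Mb/s over a single pair",
    "IDSL":      "sym 128 Kb/s",
    "SDSL":      "T1/E1 on a single pair, up to 11,000 ft",
    "VDSL":      "over 10 Mb/s, loops up to 3,000 ft",
}

def describe(tech):
    """Look up a technology's quoted figures, case-insensitively."""
    return XDSL_SUMMARY.get(tech.upper(), "not summarized above")

print(describe("adsl"))  # -> 768 Kb/s-9 Mb/s down, 64 Kb/s-1.5 Mb/s up
```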
Original Article here
The best products and services in telecom and inter-networking supply
Sprint CEO Hesse Contradicts Masayoshi Son on Fixed LTE Plans
To try to sell regulators on a Sprint takeover of T-Mobile, SoftBank boss and Sprint Chairman Masayoshi Son has been insisting that the deal would allow Sprint to enter the fixed-LTE broadband space, bringing additional competition to the home broadband market. This strategy appears to be news to Sprint CEO Dan Hesse, who stated this week that offering a fixed LTE service is nowhere on Sprint's horizon. When outlets pointed out the contradictory positions of Son and Hesse, the company's PR department stated:
quote:
"Dan was speaking to Sprint's short-term focus--completing our 3G and voice network rip and replace, rolling out our 4G LTE network, launching Sprint Spark, expanding the Framily platform and growing EBITDA--and how they fit with our spectrum and other assets/resources," Sprint spokesman Scott Sloat told FierceWireless. "Masa's remarks have been in the context of his long-term vision."
In other words, like I noted in April, Sprint has its hands full just running a decent LTE network right now, and the promise of significant fixed LTE competition is just regulator bait.
by Karl Bode, 08:23AM Friday Jun 06 2014
Original Article here
The best products and services in telecom and inter-networking supply
Apr 28, 2014
Google Using Machine Learning to Boost Data Center Efficiency
Google is using machine learning and artificial intelligence to wring even more efficiency out of its mighty data centers.
In a presentation today at Data Centers Europe 2014, Google’s Joe Kava said the company has begun using a neural network to analyze the oceans of data it collects about its server farms and to recommend ways to improve them. Kava is the Internet giant’s vice president of data centers.
In effect, Google has built a computer that knows more about its data centers than even the company’s engineers. The humans remain in charge, but Kava said the use of neural networks will allow Google to reach new frontiers in efficiency in its server farms, moving beyond what its engineers can see and analyze.
Google already operates some of the most efficient data centers on earth. Using artificial intelligence will allow Google to peer into the future and model how its data centers will perform in thousands of scenarios.
In early usage, the neural network has been able to predict Google’s Power Usage Effectiveness with 99.6 percent accuracy. Its recommendations have led to efficiency gains that appear small, but can lead to major cost savings when applied across a data center housing tens of thousands of servers.
Why turn to machine learning and neural networks? The primary reason is the growing complexity of data centers, a challenge for Google, which uses sensors to collect hundreds of millions of data points about its infrastructure and its energy use.
“In a dynamic environment like a data center, it can be difficult for humans to see how all of the variables interact with each other,” said Kava. “We’ve been at this (data center optimization) for a long time. All of the obvious best practices have already been implemented, and you really have to look beyond that.”
Enter Google’s ‘Boy Genius’
Google’s neural network was created by Jim Gao, an engineer whose colleagues have given him the nickname “Boy Genius” for his prowess analyzing large datasets. Gao had been doing cooling analysis using computational fluid dynamics, which uses monitoring data to create a 3D model of airflow within a server room.
Gao thought it was possible to create a model that tracks a broader set of variables, including IT load, weather conditions, and the operations of the cooling towers, water pumps and heat exchangers that keep Google’s servers cool.
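Google has not published the model itself, but the shape of the problem can be sketched in a few lines: regress PUE against operating variables such as IT load, outside temperature and cooling-plant state. The sketch below uses scikit-learn on synthetic data; every variable, coefficient and figure in it is an invented placeholder, not Google's.

```python
# A minimal sketch of a PUE-predicting neural network, assuming synthetic
# data and scikit-learn; Google's actual model and inputs are unpublished.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2_000
it_load = rng.uniform(0.4, 1.0, n)        # fraction of peak IT load
outside_temp = rng.uniform(5.0, 35.0, n)  # degrees C
pumps_on = rng.integers(1, 5, n)          # cooling water pumps running

# Fabricated ground truth: PUE rises with outside heat, falls with load.
pue = (1.10 + 0.004 * outside_temp - 0.10 * it_load + 0.01 * pumps_on
       + rng.normal(0.0, 0.01, n))

X = np.column_stack([it_load, outside_temp, pumps_on])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2_000,
                 random_state=0),
)
model.fit(X[:1_500], pue[:1_500])
print("held-out R^2:", round(model.score(X[1_500:], pue[1_500:]), 3))
```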
Original Article here
The best products and services in Data center, telecom, and inter-networking supply
Feb 3, 2014
Telenor, Ooredoo get Myanmar licenses
Dylan Bushell-Embling | February 04, 2014
The Myanmar government has finally awarded nationwide telecom licenses to Telenor and Ooredoo, seven months after the two companies won the tender to build mobile networks in the nation.
Myanmar lawmakers have, after a long delay, settled on a regulatory framework for the newly liberalized telecom sector, clearing the path for Telenor and Ooredoo to roll out services in the nation.
In a statement, Telenor Asia head Sigve Brekke said the license terms are “a product of an extensive consultation process with the Government of Myanmar and international organisations.
“It now represents an acceptable framework that we believe will go a long way to provide the necessary long-term predictability that Telenor requires when it formally starts operations in Myanmar.”
Both licenses include mobile spectrum in the 900-MHz and 2100-MHz bands, and are valid for 15 years.
The licenses were awarded late last week, and formally come into effect tomorrow. After this time, the clock will start on the two operators' rollout commitments.
Telenor has pledged to achieve geographic coverage of 83% of the nation for voice and 78% for data within five years of the license taking effect. Ooredoo has meanwhile committed to delivering geographic coverage of 84% for both voice and data after five years.
We are the number one supplier! Specializing in data communication accessories
Original Article here
Jan 13, 2014
Modular Data Centers: Adoption, Competition Heat Up in 2014
Last week’s Schneider-AST deal highlights the modular data center market, where both adoption and competition are on the rise.
Will 2014 finally be the breakout year for pre-fabricated data centers? The year is young, but the modular market has already seen its first major M&A deal, and may soon have its first IPO.
With marquee customers in the hyperscale market, and slow but steady progress with enterprise customers, modular designs continue to gain traction. New players and new designs are emerging, further advancing the potential for pre-fab deployments.
But barriers remain. The ISO container casts a long shadow over the modular data center market. Executives in the sector say it will take additional education, as well as more public customer success stories, before the new breed of modular designs can overcome customer resistance dating to the early days of the “data center in a box.”
M&A and IPOs
On Friday, Schneider Electric announced that it had acquired AST Modular, a Barcelona-based modular specialist that has built a global business. The deal reflected the growing importance of pre-fabricated designs and Schneider’s ambitions in the modular sector.
The market for modular data centers is also becoming more competitive, with U.K. specialist Bladeroom entering the U.S. market and investment firm Fidelity launching its Centercore design as a product. Late in 2013, IDC Architects announced that it is commercializing a modular design it has deployed for global banking customers, while newcomer NextFort opened a “modular colo” facility near Phoenix.
Meanwhile, IO is hoping to become the first modular specialist to go public. The company has announced plans for an initial public offering, but hasn’t yet indicated the date for its IPO. The Phoenix-based provider counts Goldman Sachs among its roster of clients, and is bullish on the outlook for modules as the delivery model for the “software-defined data center.”
“The data center market has spoken, and the consensus is that modular has won,” said Troy Rutman, the spokesman for IO.
Progress, But Also Resistance
Other executives in the modular sector see pre-fabricated designs making their way into the mainstream more gradually, but say that resistance persists.
“You’re deploying a new technology into a mature market that is questioning its delivery,” said Rich Hering, Technical Director Mission Critical Facilities at M+W Group. “Most folks don’t like change.”
“A lot of people believe modular is just for scale-out and low reliability,” said Dave Rotheroe, Distinguished Technologist and Strategist for HP. “It’s not true. Modular designs can and do apply in the enterprise.”
“Customers are just beginning to understand what modular allows them to do,” said Ty Schmitt, an executive director and fellow at Dell Data Center Solutions. “As the customer base matures and the supply chain matures, we’ll see exponential growth.”
Early Adopters
Hyperscale cloud builders Google, Microsoft and eBay were among the earliest users of modular designs. AOL has deployed “micro-modular” data centers both indoors and outdoors. On the enterprise front, Goldman Sachs and Fidelity have been the marquee names embracing pre-fabricated data centers.
Modular designs aren’t for everyone, but Schmitt says the concept is being proven with a nucleus of forward-thinking customers seeking cheaper and faster ways to deploy their IT infrastructure.
“It’s customers who’ve transformed their business,” said Schmitt. “They’re the early adopters. As more and more customers take advantage of software resiliency, we’ll see more adoption. It’s going to be a series of small hurdles.”
BY RICH MILLER ON JANUARY 13, 2014
Original Article here
The best products and services in Data center, telecom, and inter-networking supply