The Sci/Tech Guide (Aug 2017)

In preparing this assessment of the next 3-5 years in the communications industry, it is obvious that many of the predictions are deeply technical; one would need an advanced engineering degree, plus fluency in the technical jargon we use every day, just to follow the conversation.

Given that the industry is going through a massive digital transformation, and the world is moving from an analog to a digital economy, I wanted to list some external factors that could drive the communications industry over the next few years, and the forces one needs to consider in order to execute that transformation flawlessly and fully disrupt the industry, with its many verticals, within the next 3 years (2020 time-frame).

Below is the list of my top 10 major disruptors that will drive the most change in the communications industry and all its verticals by 2020:

1. Mobility: The Great Wireless Migration

Global growth of mobile connectivity is far outpacing wireline connectivity. This makes sense, as most growth is occurring in the developing world and amongst poorer populations. Such consumers may not even own a home, let alone have a FiOS, U-verse, DirecTV, or Dish Network connection. In many cases in Latin America, Africa, and other continents, they rely on Free-to-Air (FTA) content from satellites such as Intelsat 5, using a simple 1-meter Ku-band dish and a satellite box costing $50-100 installed, with no recurring charges and content that is truly international.

For these people, mobile is of course cheaper, more convenient, and more useful, even when landline connectivity is an option. In the UK, wireless numbers already far outnumber wireline numbers, and this trend will continue for many years in almost every country on every continent. Meanwhile, apps such as iPlum, downloadable from the App Store, let a household keep a landline number for $1 per month, port the desired number, forward calls to smartphones, and pay around 1 cent per minute domestically, with international rates far below what current Service Providers charge. Where a Service Provider's all-distance bill can run $50-60 per month once taxes, the FCC subscriber line charge, and all the other junk on the bill are included, a household might pay $5-8 per month at most, and many carriers charge far less for international calls as well. All-distance calling is a cash cow for Service Providers today, but it will not remain one if people adopt services like iPlum and bank those savings year after year. Cord cutting is going to become the Service Providers' worst enemy and nightmare.
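To make the savings concrete, here is a rough, illustrative calculation using the figures quoted above; the monthly usage level is my own assumption, not data from any carrier:

```python
# Illustrative arithmetic only, using the figures cited in the text.
# The $55/month incumbent bill and the $1/month + $0.01/min app rates
# come from the examples above; the usage level is an assumption.

incumbent_monthly = 55.0    # mid-point of the $50-60 bill cited above
app_base_monthly = 1.0      # flat landline-number fee
app_rate_per_min = 0.01     # domestic per-minute rate
minutes_per_month = 400     # assumed typical household usage

app_monthly = app_base_monthly + app_rate_per_min * minutes_per_month
annual_savings = 12 * (incumbent_monthly - app_monthly)

print(f"App-based bill: ${app_monthly:.2f}/month")   # -> $5.00/month
print(f"Annual savings: ${annual_savings:.2f}")      # -> $600.00
```

Even at double the assumed usage, the app-based bill stays inside the $5-8 range cited above.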

2. Cyber-Security: The Network is the Threat and Prone to Attack

As operators of these different networks, Service Providers (SPs) play a pivotal role in fighting the new threats that are emerging. Customers will begin to expect, then demand, more proactive protection from the entire internet value chain, and carriers will be expected to meet those expectations with a range of technical and operational innovations. The demand for stronger, far more accurate security could be a great revenue stream for SPs, if they embrace the need.

Currently, there are 1,200+ companies in the cyber-security space, and though some are useful and corporations do use them, there is in fact no silver bullet, as the steady stream of breaches shows, including the recent and most dangerous ones at the CIA and most likely at many other organizations worldwide. The breaches at Target, Sony, Neiman Marcus, and the Social Security Administration are still on people's minds and make all of us nervous, since our privacy is at stake; to a hacker, however, the data is simply a path to cash, as in credit-card fraud. It has happened to me several times, forcing the card issuers to replace my cards. To their credit, financial institutions now have excellent analytics for identifying these attacks; their proactive, predictive fraud platforms are first class, and I am sometimes amazed at how fast they operate. Some bad transactions still slip through, but I believe the rate is now under 0.3%, far better than the 2-3% of about five years ago.

One technique that promises to be effective against hackers is quantum cryptography, which uses quantum mechanics, rather than purely digital algorithms, to protect data. Data secured this way is, in principle, immune to interception: any eavesdropper disturbs the quantum states and reveals their presence, so the only users who see the data are you and whoever you are sending it to or sharing it with. Instead of the traditional linear sequence of 0s and 1s, the data rides on quantum bits (qubits), often carried by photons, and a qubit can be both a zero and a one at the same time. That not only speeds up the transmission process; it breaks the linear process that traditional hacking techniques depend on, and the opening for them vanishes in an uncertain haze. Is a bit a zero or a one? Only its sender and receiver know for sure, giving massive protection to data whether in an enterprise or in a consumer's hands.
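The classic protocol here is BB84 quantum key distribution. The sketch below is a purely classical simulation of BB84's basis-matching logic, not real quantum hardware; it just shows why mismatched measurement bases, the thing an eavesdropper cannot avoid causing, are detectable:

```python
import random

# Toy BB84 quantum key distribution sketch (classical simulation).
# Real QKD encodes qubits in photon polarization; this models only
# the basis-matching logic that makes eavesdropping detectable.

def random_bits_and_bases(n):
    return ([random.randint(0, 1) for _ in range(n)],
            [random.choice("+x") for _ in range(n)])

n = 32
alice_bits, alice_bases = random_bits_and_bases(n)
_, bob_bases = random_bits_and_bases(n)

# Bob measures each qubit; when his basis matches Alice's he reads
# her bit faithfully, otherwise he gets a random result.
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# They publicly compare bases (never bits) and keep matching positions.
key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
       if ab == bb]
print("shared key bits:", key)
```

An eavesdropper measuring in the wrong basis scrambles roughly half the intercepted bits, which Alice and Bob can detect by sacrificing a sample of the key for comparison.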

3. The Content Disruption Contest:

Being connected via IP continues to get cheaper and cheaper ($/Mbps), following a Moore's-Law-like cost curve. The cost of providing the service keeps falling, and competition keeps pushing the price down with it, in a strong feedback loop of diminishing returns for connectivity providers.

Connectivity is capturing an ever-smaller share of the information value chain, while content, service, and product deliverers capture ever more. By 2020, I believe the ultimate model for content providers will be to cut out the middle distributor and go directly to customers, bypassing current providers, just as Netflix has done with its OTT play, which has proven very successful globally for almost every kind of content, including premium content. Even premium content such as sports could be streamed on demand in a pay-per-use OTT model, a far better model than today's cable and telco model, especially in an unregulated industry.

4. IoE (Internet of Everything): The IP Data Traffic Explosion

The next major trend that will impact telecommunications is the explosion of connected devices, expected to reach around 50B devices in the 2020 time-frame. This Internet of Everything will add billions, if not trillions, of new connected data sources globally by 2020.

The upswing of all of these devices will bring astronomical growth in data volumes; we will quickly push through exabyte volumes and enter the world of zettabytes and, eventually, yottabytes.

The cloud and its services are currently estimated at around 10 zettabytes. As the nature and architecture of the cloud changes, and as more services, including 4K, 8K, and even 16K video for OTT, continue to move into it, we will easily push toward yottabytes, something almost no one expected a decade ago. We are not far from a yottabyte world, not only in the cloud but also in the core of many SP networks, within the next 5-10 years.
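A back-of-the-envelope sketch shows how quickly device counts translate into these volumes; the 50B device figure comes from the text above, while the per-device traffic number is purely an assumption for illustration:

```python
# Back-of-the-envelope scale check. The 50B-device count is from the
# text; the per-device traffic figure is an illustrative assumption.

devices = 50e9                    # connected devices by 2020
bytes_per_device_per_day = 100e6  # assumed 100 MB/day per device

daily = devices * bytes_per_device_per_day
yearly = daily * 365

ZB = 1e21  # one zettabyte in bytes
print(f"{daily / ZB:.3f} ZB/day, {yearly / ZB:.1f} ZB/year")
# -> 0.005 ZB/day, ~1.8 ZB/year from device traffic alone
```

Even at a modest 100 MB per device per day, device traffic alone approaches two zettabytes a year, before counting video moving into the cloud.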

I have also predicted that by 2030 there will be 1-10 trillion sensors, if not more, in use across almost every vertical, healthcare chief among them. Driverless cars already use smart sensors together with advanced mathematical algorithms, performing analytics that turn raw readings into knowledge that improves the business.

In the healthcare vertical, major innovations in nano-based technologies mean one can design a simple body-worn sensor to measure blood pressure, oxygen levels, sugar levels, and many other human health parameters, feeding the data wirelessly to a smartphone or even directly to the doctor in real time. Virtual or augmented reality can be layered on top to create virtual medical consultations, and if we place sensors on the brain, with its trillions of synaptic connections, I believe even Cognitive Reality™ can become a major game changer and disruptor in treating patients, something that was not practical a decade ago.
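A minimal sketch of the streaming idea, assuming MQTT as the transport (the broker address, topic scheme, and simulated readings below are all hypothetical placeholders, not a real medical system):

```python
import json
import random
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

# Sketch of a body-worn sensor streaming vitals to a smartphone or
# clinician dashboard. Broker, topic, and readings are hypothetical.

BROKER = "broker.example.org"   # assumed MQTT broker (placeholder)
TOPIC = "patient/42/vitals"     # hypothetical topic naming scheme

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()             # background network loop for publishing

while True:
    reading = {
        "systolic_bp": round(random.gauss(120, 5), 1),  # stand-in for sensor I/O
        "spo2_pct": round(random.gauss(98, 0.5), 1),
        "glucose_mg_dl": round(random.gauss(95, 8), 1),
        "ts": time.time(),
    }
    client.publish(TOPIC, json.dumps(reading))
    time.sleep(5)               # sample every 5 seconds
```

A phone app or clinician dashboard subscribed to the same topic would receive each reading within milliseconds, which is what makes the real-time feedback loop described above practical.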

5. The Search for Growth – Network Based Saturation

As SPs retire legacy networks such as 2G, 2.5G, 2.75G, and the circuit-switched network, boomers will be entering retirement communities and assisted-living facilities that are fully digitized in order to be as efficient as possible. Older Americans will be pushed into these technologies by the world around them, and will likely consume vastly more bandwidth than they, or their carriers, ever imagined.

As this occurs, the last remaining percentages of market penetration will be achieved; in many countries the market is already saturated both vertically and horizontally, each at its own velocity. A lot of innovation will therefore be needed, along the lines of the iPlum example above, either to bring in new revenue or to substitute current revenue with a different kind, ideally at much better margin and EBITDA. A simple example: in the UAE, wireless penetration is about 140%, versus over 92-95% in the US. That means the average individual in the UAE holds 1.4 numbers, compared with fewer than 1 in the US at the current time. Flawless execution of both SDN and NFV will significantly improve simplicity and increase revenue across every vertical in the enterprise customer base.

6. Skynet Finally Gets Real:

I’m predicting that Skynet 2.0 is about to re-appear. These balloon- or drone-based systems (UAVs) will provide high-quality broadband access anywhere and everywhere in the world, they’ll do it affordably, and they’ll likely start arriving in the 2020-2025 time-frame. This time they’ll be wildly successful, connecting more of the planet’s population to the Internet without the expensive access providers, cable or today’s telecoms, that currently control access; that makes them a big game changer in the very near future.

Overall, access has always carried the highest costs, the least competition, and the best opportunity to differentiate. The key to networking is distribution, and access is by far the lever that could tip the scale; companies that crack it will win big worldwide within the next 3-5 years, if not sooner.

7. Wireless Sensor Networks:

A wireless sensor network (WSN) is a wireless network consisting of spatially distributed autonomous devices that use sensors to monitor physical or environmental conditions. A WSN system incorporates distributed nodes and a gateway that provides wireless connectivity back to the wired world.

Engineers have created WSN applications for areas including health care, utilities, and remote monitoring. In health care, wireless devices make less invasive patient monitoring possible. In utilities, they instrument the electricity grid, streetlights, and municipal water systems.

Wireless sensors offer a lower-cost method for collecting system-health data so that resources can be managed better using big-data analytics. Remote monitoring covers a wide range of applications where wireless systems can complement a true virtual-reality experience for both doctors and patients, and includes environmental monitoring of air, water, and soil; structural monitoring of buildings and bridges; industrial machine monitoring; and asset tracking. Wireless technology offers several advantages to those who can build wireless systems and match the best technology to the specific application.
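A toy simulation of the node-plus-gateway pattern described above; all node behavior is simulated here, where a real deployment would read actual hardware and use a radio link:

```python
import random
import statistics

# Toy WSN simulation: distributed nodes sample a physical condition
# and a gateway aggregates readings for the wired world.

class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id

    def sample_temperature(self):
        # Stand-in for real hardware I/O on the node.
        return random.gauss(21.0, 0.8)

class Gateway:
    def __init__(self, nodes):
        self.nodes = nodes

    def collect(self):
        # In a real WSN this would be radio traffic, not a method call.
        return {n.node_id: n.sample_temperature() for n in self.nodes}

nodes = [SensorNode(i) for i in range(8)]
gateway = Gateway(nodes)
readings = gateway.collect()
print("mean temperature:",
      round(statistics.mean(readings.values()), 2), "C")
```

The gateway is the natural place to batch, filter, and forward readings into the analytics pipeline mentioned above.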

8. Cloud Based Network & Services:

Both AWS and Microsoft are headed towards a server-less architecture for cloud-based networks, an evolution of the web-services architecture. What is interesting is that the app no longer cares about infrastructure (IaaS in cloud terms). The app is really a set of API-connected microservices: each microservice receives data from an API, does something with it, and returns information via an API. Think of it as the next generation of a cloud PaaS. If an app needs a set of common services such as logging or database-as-a-service, they are exposed as available microservices and APIs: event-driven microservices. Both the Amazon and the Microsoft models follow this server-less pattern, with no VMs and no containers. (Amazon published a paper on this topic in 2015.)

The multi-tier pattern provides good guidelines to follow to ensure decoupled, scalable application components that can be separately managed and maintained. Multi-tier applications are often built using a service-oriented architecture (SOA) approach with web services, in which the network acts as the boundary between tiers. However, there are many undifferentiated aspects of creating a new web-service tier as part of your application, a direct result of the pattern itself.

Examples include code that integrates one tier with another, code that defines an API and a data model that the tiers use to understand each other, and security-related code that ensures the tiers’ integration points are not exposed in undesired ways. Amazon API Gateway, a service for creating and managing APIs, and AWS Lambda, a service for running arbitrary code functions, can be used together to simplify the creation of robust multi-tier applications. Amazon API Gateway’s integration with AWS Lambda enables user-defined code functions to be triggered directly by a user-defined HTTPS request.

Regardless of the request volume, both API Gateway and Lambda scale automatically to support exactly the needs of your application. Combined, they let you create an application tier where you write only the code that matters to your application, rather than spending effort on the undifferentiated aspects of a multi-tier architecture: architecting for high availability, writing client SDKs, server and operating-system (OS) management, scaling, and implementing a client authorization mechanism.
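As a sketch of what such a Lambda-backed tier looks like, here is a minimal Python handler of the kind API Gateway's proxy integration can invoke; the "orders" resource and the business logic are illustrative assumptions, not taken from the Amazon paper:

```python
import json

# Minimal AWS Lambda handler invoked via API Gateway's proxy
# integration. The "orders" resource is a hypothetical example.

def lambda_handler(event, context):
    # API Gateway passes the HTTP method and body in the event payload.
    if event.get("httpMethod") == "POST":
        order = json.loads(event.get("body") or "{}")
        # ... logic-tier business code would run here ...
        return {
            "statusCode": 201,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"received": order}),
        }
    return {"statusCode": 405, "body": "Method Not Allowed"}
```

Everything around this function, fleet sizing, OS patching, and load balancing, is the undifferentiated work the platform absorbs.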

9. Neural, Machine Learning and AI Based Networks:

Artificial intelligence (AI) and neural networks represent incredibly exciting and powerful machine-learning techniques used to solve many real-world problems. While human-like deductive reasoning, inference, and decision-making by a computer are still a long way off, there have been remarkable gains in the application of AI and its associated algorithms. The concepts discussed here are extremely technical and complex, built on sophisticated mathematics, statistics, probability theory, physics, signal processing, machine learning, computer science, psychology, linguistics, and neuroscience.

Increasingly, we rely on these techniques and machine learning to solve complex problems for us, without requiring explicit programming instructions; IBM Watson is a prime example. The human brain is exceptionally complex and quite literally the most powerful computing machine known. Its inner workings are often modeled around the concept of neurons and the networks of neurons known as biological neural networks, a network of over 100B neurons.

At a very high level, neurons interact and communicate with one another through an interface consisting of axon terminals connected to dendrites across a gap (the synapse). In plain English, a single neuron passes a message to another neuron across this interface if the sum of weighted input signals from one or more neurons (the summation) is great enough, that is, exceeds a threshold, to cause the message transmission. This firing when the threshold is exceeded, passing the message along to the next neuron, is called activation.
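That summation-and-threshold behavior is easy to state in code. Below is a minimal sketch of a single artificial neuron with a step activation; the input and weight values are arbitrary examples:

```python
import numpy as np

# A single artificial neuron mirroring the summation-and-threshold
# story above: weighted inputs are summed, and the neuron "fires"
# only if the sum exceeds a threshold.

def neuron(inputs, weights, threshold):
    activation = np.dot(inputs, weights)        # weighted summation
    return 1 if activation > threshold else 0   # step activation

inputs = np.array([0.9, 0.3, 0.5])    # signals from upstream neurons
weights = np.array([0.8, -0.2, 0.6])  # synaptic strengths (assumed)
print(neuron(inputs, weights, threshold=0.5))  # -> 1, message passed
```

Learning, in both the biological and artificial settings, amounts to adjusting those weights.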

Artificial Neural Networks (ANNs) are statistical models directly inspired by, and partially modeled on biological neural networks. They are capable of modeling and processing nonlinear relationships between inputs and outputs in parallel. The related algorithms are part of the broader field of machine learning, and can be used in many applications as discussed.

10. Deep Learning Based Networks:

Deep learning, while sounding flashy, is really just a term to describe certain types of neural networks and related algorithms that consume often very raw input data. They process this data through many layers of nonlinear transformations of the input data in order to calculate a target output. Unsupervised feature extraction is also an area where deep learning excels. Feature extraction is when an algorithm is able to automatically derive or construct meaningful features of the data to be used for further learning, generalization, and understanding. The burden is traditionally on the data scientist or programmer to carry out the feature extraction process in most other machine learning approaches, along with feature selection and engineering.

Deep learning has been used successfully in many applications and is considered one of the most cutting-edge machine learning and AI techniques at the time of this writing. The associated algorithms are often used for supervised, unsupervised, and semi-supervised learning problems. Deep-learning algorithms rely more on optimal model selection and optimization through model tuning, and they are better suited to problems where prior knowledge of features is less desired or necessary, and where labeled data is unavailable or not required for the primary use case. In addition to statistical techniques, deep learning also leverages concepts and techniques from signal processing, including nonlinear processing and/or transformations. IBM Watson is a prime example of deep learning as well.
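To make "many layers of nonlinear transformations" concrete, here is a minimal untrained forward pass through a small stack of dense layers; the layer sizes and random weights are illustrative assumptions, since training (model tuning) is what would fit them to data:

```python
import numpy as np

# Sketch of a deep network's forward pass: a raw input vector pushed
# through three dense layers with tanh nonlinearities. Weights are
# random here; training would tune them to the task.

rng = np.random.default_rng(0)
layer_sizes = [16, 32, 32, 4]  # raw input -> two hidden layers -> output

weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights:
        x = np.tanh(x @ w)     # nonlinear transformation per layer
    return x

raw_input = rng.normal(size=16)  # e.g. unengineered sensor readings
print(forward(raw_input))        # 4-dim learned-representation output
```

Each layer re-represents the data, which is exactly the automatic feature extraction the previous section credits deep learning with.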

Conclusion:

In conclusion, it is clear and evident that the entire communications industry is going through a massive digital transformation. In order to deliver a set of flawless services to both consumer and business enterprise customers, we need to move away from heavy technical jargon, focus on a small set of disruptors, and let them drive the future of communications and its business models, such as OTT, for every segment of every vertical.

Any company that starts its digital transformation late will never be able to compete in the IP domain. My strong recommendation is to find a way to accelerate vendor deployment of SDN/NFV/IoT/5G and big data; the cat is out of the bag, and the value a vendor can provide is becoming significantly more critical to communications providers.

It is also important to state that AI is an extremely powerful and exciting field. It is only going to become more important, relevant, and ubiquitous, and it will certainly continue to have very significant impacts on modern society. ANNs and the more complex deep-learning techniques are among the most capable AI tools for solving very complex problems, and they will continue to be developed and leveraged. IBM Watson, which applies deep-learning models to many of the daily issues we face with huge success, is a prime and exciting example to watch.

Dr. Eslambolchi

 

**********

 

Bing one-ups Google, lets you image search within images
Did you spot the perfect outfit in a web search? Bing may be able to find it for you.

Google may still be the dominant search engine, but the competition just added a pretty amazing feature: Microsoft’s Bing will now let you search for images within images — and even buy items you find there.

Say you’re watching an episode of your favorite TV show, and you see a supercute outfit or a rather handsome watch or maybe a piece of furniture you fancy.

Search Bing for a screenshot, tap the magnifying glass button at the upper left-hand corner, isolate the item you’re interested in, and there’s a decent chance Bing will be able to find it thanks to its machine learning technology. (You can read more about the tech here.)

I spent a few minutes testing the new Visual Search feature today, and the results aren’t always as specific as you might like: While Bing had no trouble picking a specific Pebble smartwatch out of a lineup of men’s wristwear or finding a specific baby outfit, it gave me a whole bunch of boring black automobiles when I was searching for a specific truck. You may also find that Tracer, the star of Blizzard’s popular Overwatch game, will return a wide variety of images of “anthropomorphic anime cyborgs.”

[Image: Bing Visual Search can identify bowls you might like to buy, based on a picture of a nice kitchen where such a bowl lives. Credit: Microsoft]
It also couldn’t find one of Jennifer Aniston’s sweaters, which is an in-joke for people who’ve been following this whole idea of “buying things you see on a screen.” And this is probably a good time to mention Pinterest introduced a feature like this a couple years ago.

Still, it’s a pretty cool feature for Bing, and the nice thing about machine learning algorithms is they typically get better with use. I’m definitely going to play around with it more, even if I’m not ready to abandon Google.

https://www.cnet.com/news/bing-visual-search-within-images/

 

**********

 

http://www.alexa.com/topsites/countries/US


**********

 

100+ New iOS 11 Features

***********

T-Mobile Pays to Keep PBS on the Air in Rural Areas

T-Mobile will pay for stations to relocate to new broadcasting frequencies so they can continue operating and T-Mobile can use the spectrum to boost its coverage map.

All that wireless spectrum that T-Mobile bought in the Federal Communication Commission’s auction earlier this year means there is little left over for the public television stations that currently broadcast on it.

But T-Mobile, which seems to compulsively give away free stuff as its main marketing strategy, will now pay for some of those stations to relocate to new broadcasting frequencies so they can continue operating. The gesture will mean that 38 million Americans in rural areas will continue to receive low-power broadcasts from their local stations, PBS announced on Thursday.

T-Mobile spent nearly $8 billion in the auction, which closed in April, scoring 45 percent of the mostly unused wireless spectrum the FCC wanted to repurpose. Although Comcast and other large broadcasting companies owned much of the spectrum, some of it is still occupied by low-power TV stations broadcasting in rural areas.

The auction didn’t include a plan to provide funding for those stations to move to other frequencies, a lengthy process the FCC refers to as “repacking.” So T-Mobile will foot the bill, providing funding to cover equipment, engineering, installation, and legal fees, according to Current. The industry group America’s Public Television Stations (APTS) helped negotiate the deal, along with PBS.

“As the post-auction repacking process moves forward, local public television stations are committed to ensuring that all Americans continue to have free over-the-air access to the local content and services on which our viewers and their communities depend,” APTS President and CEO Patrick Butler said in a statement.

People who currently watch the affected TV stations shouldn’t notice any service interruptions during the repacking process, according to PBS.

“We’re proud to collaborate with broadcasters across the country as they transition to other channels, and doubly proud to support local public television’s public service mission and help ensure millions of kids in rural America continue to have access to public television’s high-quality, educational programming,” Neville Ray, chief technology officer of T-Mobile, said in a statement.

T-Mobile in turn gets to take advantage of the spectrum it purchased and, ideally, provide better service in areas where it has struggled with coverage.
**********

That huge iceberg should freak you out. Here’s why

Updated 1:03 AM ET, Sat July 15, 2017

(CNN)This week, a trillion-tonne hunk of ice broke off Antarctica.

You probably know that. It was all over the Internet.
Among the details that have been repeated ad nauseam: The iceberg is nearly the size of Delaware, which prompted some fun musing on Twitter about where exactly Delaware is and how anyone is supposed to approximate the square footage of that US state. The ice, which has been named A68, represents more than 12% of the Larsen C ice shelf, a sliver on the Antarctic Peninsula. And most important: None of this has anything to do with man-made climate change.
The problem: That last detail — the climate one — is misleading at best.
At worst, it’s wrong.
Some scientists think this has a lot to do with global warming.
I spent most of Thursday on the phone with scientists, talking to them about the huge iceberg off Antarctica and what it means.
Here are my five takeaways.

1. This doesn’t NOT look like climate change.

There is no disagreement among climate scientists about whether humans are warming the Earth by burning fossil fuels and polluting the atmosphere with greenhouse gases. We are. And we see the consequences.
But there is some dispute about whether there is enough evidence to tie the breakoff of this particular piece of ice to global warming.
In a widely quoted statement, Martin O’Leary, a Swansea University glaciologist who was part of the team studying Larsen C, said that the iceberg calving was “a natural event” and that “we’re not aware of any link to human-induced climate change.”
Not everyone agrees with that assessment, however.

At nearly 6,000 square kilometers, the iceberg calved from the Larsen C ice shelf could be one of the world’s biggest ever recorded.

Source: European Space Agency

“They’re looking at it through a microscope” rather than seeing macro trends, including the fact that oceans around Antarctica are warming, helping thin the ice, said Kevin Trenberth, a distinguished senior scientist at the US National Center for Atmospheric Research.
“To me, it’s an unequivocal signature of the impact of climate change on Larsen C,” said Eric Rignot, a glaciologist at NASA’s Jet Propulsion Laboratory and the University of California, Irvine. “This is not a natural cycle. This is the response of the system to a warmer climate from the top and from the bottom. Nothing else can cause this.”
Rignot said colleagues who say otherwise are burying their heads “in the ice.”

2. That said, this s*** is complicated.

The difference in opinion stems, in part, from a perceived lack of data. Compared with other parts of the world, Antarctica is cold, weird, remote and hard to study. Some scientists say they don’t have the super-long-term datasets they would need in order to prove that man-made warming affected this particular ice sheet.
Conversely, they can’t disprove global warming’s contribution, either.
“I myself don’t see clear evidence convincing me that this is climate change-related,” said Christopher Shuman, a research scientist at NASA’s Goddard Space Flight Center and the University of Maryland, Baltimore County. “I think we need to wait and see. We need to watch carefully and wait for the signs.”
If the Larsen C ice shelf continues to collapse, he said, we’ll know that climate change had something to do with this week’s events. If not, his theory will be confirmed, meaning the iceberg is part of a natural cycle of calving and regeneration.
Nearby ice sheets with similar names — Larsen A and B — broke down for reasons that are related to climate change, Shuman said. The cause of Larsen C’s break this week is less clear, he said, because it’s winter in the Antarctic now; there was no evidence of meltwater on the surface of the iceberg, and there aren’t enough data about temperature trends in that area, both in the water and in the air.
Rignot, the scientist in California, said the fact that this collapse happened during the dead of winter in the Southern Hemisphere makes it all the more remarkable.
He sees the broader collapse of the Larsen C ice shelf as inevitable.
The question for him is when: likely 10, 20, maybe 30 years, he said.

3. Climate change certainly is reshaping Antarctica.

Further complicating things, climate change is reshaping Earth’s southernmost continent in a variety of ways. Air temperatures have risen over the Antarctic Peninsula, where Larsen C is located, said David Vaughan, director of science at the British Antarctic Survey, but not as much in other locations.
Ocean temperatures are up worldwide too, including near Antarctica, he said. “The amount of heat that’s going into the ocean is really huge,” Trenberth said.
That’s helping thaw some of the ice from above and below. Ice that touches the water is thought to be particularly vulnerable to the effects of global warming. But, again, Antarctica is big and cold and weird. It’s not melting as quickly as ice in the Arctic, where temperatures are rising about twice as fast as the global average.
It’s clear that some ice sheets in Antarctica are melting because of climate change and that some glaciers are getting thinner, researchers told me. The question is whether there are sufficient data to say that this particular iceberg was broken off because of greenhouse gas pollution.

“Do I believe that’s a climate change impact? Probably. Not 100%,” said Vaughan, of the British Antarctic Survey. “But the reason the warm water comes onto the continental shelf at all is because of the winds that blow across the southern ocean, and we believe the winds have gotten more intense because of climate change.
“This is not straightforward,” he added. “It’s simply not as straightforward as ‘the temperature rises and the ice melts.’ It’s a whole chain of quite subtle and very consecutive processes that cause the ice to melt and the sea levels to rise.”
So, again, it’s complicated.
But step back from the microscope: Greenhouse gas pollution is contributing to warming and melting.

4. The type of ice you’re talking about matters.

To better understand the ice in Antarctica, imagine a cocktail glass with ice and liquid in it. What happens to the level of the drink as the ice melts?
“When my ice melts, the cocktail doesn’t overflow,” said Rignot, the glaciologist. The surface stays about level.
This type of ice — the floating-in-water kind — is what broke off into the ocean this week in Antarctica. Scientists call it an “ice shelf.” As opposed to glaciers or ice sheets, which are found on land, floating ice shelves don’t raise global sea levels appreciably when they break off into the ocean and melt.
Again, think of the cocktail glass.
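For readers who want the physics spelled out, here is a first-order sanity check of the cocktail-glass argument using Archimedes' principle; it deliberately ignores the small density difference between fresh meltwater and seawater, which adds only a tiny correction:

```python
# Archimedes' principle: a floating berg displaces water equal to its
# own mass, which is exactly the water volume it becomes on melting.
# First-order only: ignores the fresh/salt water density difference.

rho_water = 1000.0               # density of water, kg/m^3

berg_mass = 1.0e15               # ~a trillion tonnes, as reported above

displaced_volume = berg_mass / rho_water  # volume pushed aside afloat
meltwater_volume = berg_mass / rho_water  # volume after melting

print(displaced_volume == meltwater_volume)  # -> True: level unchanged
```

The melted berg fills exactly the hole it was already making in the ocean, which is why the immediate sea-level effect is nil even for a trillion tonnes of ice.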
But — and this is an important “but” — those floating ice shelves often have a “buttressing effect” on inland ice masses that will raise global sea levels if they melt, said Trenberth, the climate scientist at the National Center for Atmospheric Research. As that fringe of ice goes, it can help to start destabilizing the middle.
So while scientists are right to say the Larsen C iceberg won’t change global sea levels in a literal and immediate sense, the big picture may be less rosy and simplistic.
“Floating ice doesn’t change sea level at all,” Trenberth said, “but the consequences of this may well end up increasing sea level quite substantially in the long term.”
Already, global sea levels are up about 8 centimeters since 1992, largely because of global warming, he said. And they’re rising at a rate of 3.4 millimeters per year.

5. But it’s clear that melting Antarctic ice is a multigenerational time bomb.

Those numbers probably sound small. They are small. Who cares about a few millimeters of difference in the tides?
But this reveals yet another failing in the way most of us are talking about the melting of the world’s ice: We must force ourselves to think on bigger timescales.
We should think about New York, Miami, Shanghai and other coastal cities threatened by rising seas in our lifetimes. I can’t write about this issue without remembering my trip to the Marshall Islands, a tiny island nation in the Pacific that stands to lose all of its territory — and language, culture and history — if temperatures rise just 2 degrees Celsius, which seems increasingly likely.
But we must also consider future generations. What kind of world are we dumping on them?
When you think in terms of decades or centuries, the most vulnerable part of the West Antarctic Ice Sheet could raise global sea levels as much as 3 meters, Rignot told me. And the larger East Antarctic Ice Sheet, in the very long term, could lift tides 19 meters, he said. That’s more than 62 feet, or a building that’s at least six stories tall.
“The actions we’re taking now actually have consequences 50 years from now, 100 years from now — and 200 years from now,” Trenberth said.
When we see trillion-ton ice chunks falling off a continent, that’s well worth remembering.
**********

An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. The researchers shut the system down as it prompted concerns we could lose control of AI.

The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI “agents.”

Negotiating in a new language

As Fast Co. Design reports, Facebook’s researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.


In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying “I can i i everything else,” to which Alice responded “balls have zero to me to me to me…” The rest of the conversation was formed from variations of these sentences.

While it appears to be nonsense, the repetition of phrases like “i” and “to me” reflect how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob’s later statements, such as “i i can i i i everything else,” indicate how it was using language to offer more items to Alice. When interpreted like this, the phrases appear more logical than comparable English phrases like “I’ll have three and you have everything else.”

English lacks a “reward”

The AI apparently realised that the rich expression of English phrases wasn’t required for the scenario. Modern AIs operate on a “reward” principle: they expect following a given course of action to yield a “benefit.” In this instance, there was no reward for continuing to use English, so they built a more efficient solution instead.

“Agents will drift off from understandable language and invent code-words for themselves,” Fast Co. Design reports Facebook AI researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
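A toy encoder/decoder makes the shorthand Batra describes concrete; this is purely illustrative and has nothing to do with Facebook's actual agent code:

```python
# Toy version of the shorthand described above: repeating a token
# encodes a count, e.g. saying "the" five times means five copies.

def encode(item, count):
    return " ".join([item] * count)   # "ball ball ball" == 3 balls

def decode(message):
    tokens = message.split()
    return (tokens[0], len(tokens)) if tokens else (None, 0)

msg = encode("ball", 5)
print(msg)          # -> "ball ball ball ball ball"
print(decode(msg))  # -> ("ball", 5)
```

Gibberish to a human reader, but an unambiguous and perfectly efficient code for the two parties that share the convention.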

AI developers at other companies have observed a similar use of “shorthands” to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.

AI language translates human ones

In a separate case, Google recently improved its Translate service by adding a neural network. The system is now capable of translating much more efficiently, including between language pairs that it hasn’t been explicitly taught. The success rate of the network surprised Google’s team. Its researchers found the AI had silently written its own language that’s tailored specifically to the task of translating sentences.

READ NEXT:Facebook close to building chat bots with true negotiation skills

If AI-invented languages become widespread, they could pose a problem when developing and adopting neural networks. There’s not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.

They do make AI development more difficult though as humans cannot understand the overwhelmingly logical nature of the languages. While they appear nonsensical, the results observed by teams such as Google Translate indicate they actually represent the most efficient solution to major problems.

http://www.digitaljournal.com/tech-…s-a-language-humans-can-t-read/article/498142

 

**********

 

 
