Amazon Web Services

Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms to individuals, companies, and governments on a metered pay-as-you-go basis.

Together, these cloud computing web services provide a set of primitive abstract technical infrastructure and distributed computing building blocks and tools.

One of these services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, available at all times, through the Internet.

AWS's version of virtual computers emulates most of the attributes of a real computer, including hardware central processing units (CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-disk/SSD storage; a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, and customer relationship management (CRM).

Fees are based on a combination of usage (known as a "pay-as-you-go" model), the hardware, operating system, software, or networking features chosen by the subscriber, and the required availability, redundancy, security, and service options.

Subscribers can pay for a single virtual AWS computer, a dedicated physical computer, or clusters of either.

As part of the subscription agreement, Amazon provides security for subscribers' systems.

Amazon markets AWS to subscribers as a way of obtaining large-scale computing capacity more quickly and cheaply than building an actual physical server farm.

All services are billed based on usage, but each service measures usage in varying ways.

As of 2017, AWS held a dominant 34% of all cloud (IaaS, PaaS), while the next three competitors, Microsoft, Google, and IBM, held 11%, 8%, and 6% respectively.

As of 2020, AWS comprises more than 212 services, including computing, storage, networking, database, analytics, application services, deployment, management, mobile, developer tools, and tools for the Internet of Things.

The most popular include Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (Amazon S3).

Most services are not exposed directly to end users, but instead provide functionality through APIs for developers to use in their applications.

Amazon Web Services' offerings are accessed over HTTP, using the REST architectural style and SOAP protocol for older APIs and exclusively JSON for newer ones.
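Since these APIs are what developers actually use, here is a minimal sketch of calling AWS over its HTTPS/JSON APIs with boto3, the official Python SDK. The region, bucket name, and file names are hypothetical placeholders, and it assumes valid AWS credentials are already configured.

# Minimal sketch using boto3, which signs and sends the underlying HTTPS/JSON API requests.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# List the buckets in the account (a simple read-only API call).
response = s3.list_buckets()
for bucket in response["Buckets"]:
    print(bucket["Name"])

# Upload a local file to an existing bucket (bucket name is a placeholder).
s3.upload_file("report.csv", "example-bucket-name", "reports/report.csv")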

AWS has separate operations in 22 geographical "regions", spread across North America, South America, Europe, the Middle East, Africa, and the Asia Pacific.

AWS has announced three new regions that will be coming online.

Each region is wholly contained within a single country, and all of its data and services stay within the designated region.

Each region has multiple "Availability Zones", which consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities.

Availability Zones do not automatically provide additional scalability or redundancy within a region, since they are intentionally isolated from one another to prevent outages from spreading between Zones.

Several services operate across Availability Zones (e.g., S3, DynamoDB), while others can be configured to replicate across Zones to spread demand and avoid downtime from failures.

As of December 2014, Amazon Web Services operated an estimated 1.4 million servers across 28 Availability Zones.

In simple words, AWS allows you to do the following things:

1. Run web and application servers in the cloud to host dynamic websites.

2. Securely store all of your files in the cloud so you can access them from anywhere.

3. Use managed databases like MySQL, PostgreSQL, Oracle, or SQL Server to store information.

4. Deliver static and dynamic files quickly around the world using a Content Delivery Network (CDN).

* AWS customers can enable Spot Instances for AWS Marketplace Amazon Machine Image (AMI) products while launching new instances through the EC2 console Launch Instance Wizard (LIW).

With this launch, you can reduce costs on the EC2 instances you need to run your third-party software on AWS.

* Spot Instances enable you to request unused EC2 instances at steep discounts, up to 90% compared to On-Demand prices, so you can lower your Amazon EC2 costs.

Spot Instances are an economical choice if you have flexibility in when you run your third-party applications and if your applications are fault-tolerant.

Customers, including Salesforce, Lyft, Zillow, Novartis, and Autodesk, use Spot Instances to reduce costs and get faster results.

For instance, Salesforce saved over 80% versus On-Demand Instance pricing and doubled the speed of processing machine learning and ETL workloads with Spot Instances.
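For readers who prefer the API to the Launch Instance Wizard, here is a hedged sketch of launching a Spot Instance with boto3; the AMI ID, instance type, and maximum price are placeholders, and a default VPC and configured credentials are assumed.

# Sketch: launch a Spot Instance via the EC2 API instead of the console wizard.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder Marketplace AMI ID
    InstanceType="t3.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "MaxPrice": "0.05",        # optional price cap in USD per hour
        },
    },
)
print(response["Instances"][0]["InstanceId"])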

Electric Vehicle

What are the needs of future transportation?
The future of transportation will thus focus on decarbonization. Cities will promote cycling and electric mass transit systems at the expense of individual cars, and climate change will disrupt current disrupters like Uber, since the proliferation of individual rides is too energy intensive and its impacts will become intolerable. A resurgence of cleaner transport is possible thanks to technological developments and an increased focus on renewable energy.

What is an electric vehicle?  

An electric vehicle (EV) is a vehicle that uses one or more electric motors or traction motors for propulsion. An electric vehicle may be powered through a collector system by electricity from off-vehicle sources, or may be self-contained with a battery, solar panels or an electric generator to convert fuel to electricity. EVs include, but are not limited to, road and rail vehicles, surface and underwater vessels, electric aircraft and electric spacecraft.

EVs first came into existence in the mid-19th century, when electricity was among the preferred methods for motor vehicle propulsion, providing a level of comfort and ease of operation that could not be achieved by the gasoline cars of the time. Modern internal combustion engines have been the dominant propulsion method for motor vehicles for almost 100 years, but electric power has remained commonplace in other vehicle types, such as trains and smaller vehicles of all types.

The Hyperloop: this future mode of transportation is designed for longer-haul travel between cities, countries, or even continents. The principle of the Hyperloop is based on moving people in capsules or pods that travel at high speeds through tubes over long distances.

Plug-in Hybrid Electric Vehicles (PHEVs)

Also known as Extended-Range Electric Vehicles (EREVs), this type of EV is powered by both petrol and electricity.

PHEVs can recharge the battery through both regenerative braking and ‘plugging-in’ to an external electrical charging outlet. In EREVs the petrol engine extends the range of the car by also recharging the battery as it gets low.

They are expensive and hard to maintain
In the real world, PHEVs are expensive, their fuel economy on motorway journeys is not very good, they are complex to maintain, they have raised concerns about battery life, and their resale value is uncertain. To many buyers, plug-in hybrids cause anxiety and are considered relatively complicated. Charging a PHEV is also time-consuming. Just as smartphones have varying charger connectors, PHEVs have the same problem: connector styles vary, making it difficult for some PHEVs to charge at certain stations.

Hybrid Electric Vehicles (HEVs)

HEVs are powered by both petrol and electricity. The electric energy is generated by the car’s own braking system to recharge the battery. This is called ‘regenerative braking’, a process where the electric motor helps to slow the vehicle and uses some of the energy normally converted to heat by the brakes.

HEVs start off using the electric motor, then the petrol engine cuts in as load or speed rises. The two motors are controlled by an internal computer which ensures the best economy for the driving conditions.

Regenerative Braking
An HEV cannot plug in to off-board sources of electricity to charge the battery. Instead, the vehicle uses regenerative braking and the internal combustion engine to charge. The vehicle captures energy normally lost during braking by using the electric motor as a generator and storing the captured energy in the battery.

The Honda Civic Hybrid and Toyota Camry Hybrid are both examples of HEVs.

Battery Electric Vehicles (BEVs)

BEVs are fully electric vehicles, meaning they are only powered by electricity and do not have a petrol engine, fuel tank or exhaust pipe. BEVs are also known as ‘plug-in’ EVs as they use an external electrical charging outlet to charge the battery.

The electric-vehicle market made big gains in 2019, across multiple car manufacturers – and the industry has even bigger plans for the years to come.

Rivian, for example, closed out the year with an extra $1.3 billion in investments. Tesla turned a profit, debuted the Cybertruck, delivered the first Model 3s built in its Shanghai plant, and announced a boosted range on its Model S and Model X. On the luxury end of the spectrum, the Audi E-Tron went up for sale, Porsche started production on the Taycan performance car, and Lamborghini announced its first hybrid supercar.

Increasingly restrictive emissions and fuel-efficiency regulations around the globe – but not so much in the US – are compelling carmakers to roll out vehicles more able to fit within those restrictions. Accordingly, in recent years, manufacturers have advertised a whirlwind of plans and timelines for bringing more EVs to market.

INTRODUCTION TO ANOVA FOR DATA SCIENCE (with a COVID-19 case study using Python)

By: Sneka. P

 

We are grappling with a pandemic that is operating at a never-before-seen scale. Researchers all over the globe are frantically trying to develop a vaccine or a cure for COVID-19, while doctors are struggling to keep the pandemic from overwhelming the entire world.

So let’s consider a situation where doctors have four medical treatments to apply to cure the patients. Once we have the test results, one approach is to assume that the treatment which took the least time to cure the patients is the best among them. But what if some of these patients had been partially cured already, or if any other medication was already working on them?

In this article, let me introduce the ANOVA test and its different types, which are used to make better decisions. I will then demonstrate each type of ANOVA test in Python to show how they work on COVID-19 data. So let's get going!

WHAT IS ANOVA TEST?

An Analysis of Variance test, or ANOVA, can be thought of as a generalization of the t-test to more than two groups. The independent t-test is used to compare the means of a condition between two groups. ANOVA is used when we want to compare the means of a condition between more than two groups.

ANOVA tests whether there is a difference in the mean somewhere in the model, but it does not tell us where the difference is (if there is one). To find where the difference among the groups lies, we have to conduct post-hoc tests.

To perform any tests, we first need to define the null and alternate hypothesis:

  • NULL HYPOTHESIS: There is no significant difference between the groups.
  • ALTERNATE HYPOTHESIS: There is a significant difference between the groups.

Basically, ANOVA is performed by comparing two types of variation: the variation between the sample means and the variation within each of the samples. The standard one-way ANOVA test statistic is given below.
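For reference, the usual one-way ANOVA F statistic compares the between-group variance to the within-group variance, where k is the number of groups, N the total number of observations, n_i and \bar{x}_i the size and mean of group i, and \bar{x} the overall mean:

F = \frac{MS_{between}}{MS_{within}} = \frac{\sum_{i=1}^{k} n_i (\bar{x}_i - \bar{x})^2 / (k - 1)}{\sum_{i=1}^{k} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2 / (N - k)}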

Assumptions of an ANOVA Test

There are certain assumptions we need to make before performing ANOVA:

  1. The observations are obtained independently and randomly from the population defined by the factor levels.
  2. The data for each factor level is normally distributed.
  3. Independence of cases: the sample cases should be independent of each other.
  4. Homogeneity of variance: Homogeneity means that the variance among the groups should be approximately equal.

The assumption of homogeneity of variance can be tested using tests such as Levene's test or the Brown-Forsythe test. Normality of the distribution of the scores can be tested using histograms, the values of skewness and kurtosis, or tests such as the Shapiro-Wilk test or a Q-Q plot.

The assumption of independence can be assessed from the design of the study. It is important to note that ANOVA is not robust to violations of the independence assumption. That said, even if you violate the assumptions of homogeneity or normality, you can often still perform the test and broadly trust the findings.

However, the results of ANOVA are invalid if the independence assumption is violated. In general, with a violation of homogeneity, the analysis is considered robust if you have equal-sized groups. With violations of normality, continuing with ANOVA is generally acceptable if you have a large sample size.
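As a rough illustration of checking these assumptions in Python, the sketch below runs Levene's test and the Shapiro-Wilk test from scipy on three hypothetical groups of daily case counts; the numbers are invented for demonstration only.

# Sketch: checking ANOVA assumptions with scipy on made-up data.
from scipy import stats

# Hypothetical daily new-case counts for three groups (invented numbers).
group_a = [120, 135, 128, 140, 150, 133, 129]
group_b = [210, 198, 225, 240, 215, 205, 230]
group_c = [95, 88, 102, 110, 99, 105, 97]

# Levene's test for homogeneity of variance across the groups.
lev_stat, lev_p = stats.levene(group_a, group_b, group_c)
print("Levene:", lev_stat, lev_p)              # p > 0.05 suggests roughly equal variances

# Shapiro-Wilk test for normality, applied to each group separately.
for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
    sh_stat, sh_p = stats.shapiro(g)
    print(f"Shapiro group {name}:", sh_stat, sh_p)   # p > 0.05 suggests normality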

Types of ANOVA Tests:

  1. One-way ANOVA: It has just one independent variable (see the Python sketch after this list).
  • For example, differences in corona cases can be assessed by country, and "country" can have 2, 20, or more different categories to compare.
  2. Two-way ANOVA: A two-way ANOVA (also called factorial ANOVA) refers to an ANOVA using two independent variables.
  • For example, an older age group may have higher corona cases overall compared to a younger age group, but this difference could be greater (or smaller) in Asian countries compared to European countries.
  3. N-way ANOVA: A researcher can also use more than two independent variables; this is an n-way ANOVA (with n being the number of independent variables).
  • For example, potential differences in corona cases can be examined by country, gender, age group, ethnicity, and so on.
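To make the first two types concrete, here is a minimal Python sketch of a one-way ANOVA with scipy and a two-way ANOVA with statsmodels on a small, invented COVID-style dataset; the column names and numbers are hypothetical.

# Sketch: one-way ANOVA with scipy and two-way (factorial) ANOVA with statsmodels.
# The data frame below is entirely invented for illustration.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "country":   ["India", "India", "Italy", "Italy", "USA", "USA", "India", "Italy"],
    "age_group": ["young", "old",   "young", "old",   "young", "old", "old",  "young"],
    "cases":     [120,     260,     140,     300,     180,     340,   250,    150],
})

# One-way ANOVA: do mean cases differ by country?
groups = [g["cases"].values for _, g in df.groupby("country")]
f_stat, p_value = stats.f_oneway(*groups)
print("One-way ANOVA:", f_stat, p_value)

# Two-way ANOVA: country and age group as the two independent variables.
model = ols("cases ~ C(country) + C(age_group) + C(country):C(age_group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))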

WITH REPLICATION VS. WITHOUT REPLICATION

  1. Two-way ANOVA with replication: two groups, and the members of those groups are doing more than one thing.
  • For example, suppose a vaccine has not been developed for COVID-19, and doctors are trying two different treatments to cure two groups of COVID-19-infected patients.
  2. Two-way ANOVA without replication: used when you have only one group and you are double-testing that same group.
  • For example, suppose a vaccine has been developed for COVID-19, and researchers are testing one set of volunteers before and after they have been vaccinated to see whether it works.
  3. Post-ANOVA (post-hoc) tests
  • When we conduct an ANOVA, we are attempting to determine whether there is a statistically significant difference between the groups. If we find that there is a difference, we then need to examine where the group differences lie (see the Tukey HSD sketch after this list).
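For the post-hoc step, one common choice is Tukey's HSD test. A hedged sketch with statsmodels, reusing the invented data frame from the previous snippet, could look like this:

# Sketch: Tukey HSD post-hoc test, reusing the invented df from the earlier snippet.
from statsmodels.stats.multicomp import pairwise_tukeyhsd

tukey = pairwise_tukeyhsd(endog=df["cases"], groups=df["country"], alpha=0.05)
print(tukey.summary())   # shows which pairs of countries differ significantly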

That brings us to the end. I have tried to explain the ANOVA test using a relevant case study in these pandemic times. It was a fun experience putting this all together for our community!

5G WIRELESS TECHNOLOGY

5G is the fifth generation of mobile networks. It was designed to deliver higher multi-Gbps peak data speeds, ultra-low latency, greater reliability, massive network capacity, increased availability, and a more uniform user experience.

It began deploying worldwide in 2019. It is the successor to the 4G networks that provide connectivity for most current mobile phones.

Just like its predecessors, 5G networks are cellular, and the service area is divided into small geographical areas called cells. All 5G wireless devices in a cell are connected to the Internet and the telephone network by radio waves through a local antenna in the cell.

5G uses a system of cell sites that divide their territory into sectors and send encoded data through radio waves. Every cell site must be connected to a network backbone through a wired or wireless backhaul connection.

The biggest advantage of 5G is its greater bandwidth, which gives higher download speeds, eventually up to 10 gigabits per second (Gbit/s).

The increased bandwidth will not just serve cell phones like existing cellular networks; it can also be used as a general Internet source for laptops and desktop computers, competing with existing ISPs such as cable Internet, and it will make new applications in IoT and M2M areas possible.

The increased speed is achieved partly by using higher-frequency radio waves than current cellular networks. But higher-frequency waves have a shorter range than the frequencies used by cell towers today. So, in order to ensure a wide reach of service, 5G operates on up to three frequency bands: low, medium, and high.

A 5G network is composed of three different types of cells, each requiring different antennas and each offering a different trade-off between download speed and distance/service area. 5G phones and devices connect to the network through the antenna that offers the highest speed within range of their location.

South Korea, China, and the United States are the countries that are leading the world in building and deploying 5G technology. Telecommunications operators around the world including AT&T Inc., KT Corp, and China Mobile, have been racing to build the fifth-generation (5G) of wireless technology.

The Central Government of India has set 2020 as the target for the commercial launch of 5G wireless technology. In March 2018, the government launched a three-year programme to promote research in 5G. Ericsson has also created a 5G test bed at IIT Delhi for developing applications that are tailor-made for the Indian scenario.

Some of the benefits of 5G wireless technology are:

  • 5G will make the adoption of virtual reality, augmented reality, and similar new technologies common. It will also improve our smartphones with more uniform data rates, lower latency, and lower cost per bit.
  • 5G will have the convenience of ultra-reliable, low latency links that will lead industries to invest in more projects which require remote control of critical infrastructure in various fields like medicine, aviation, etc.

The economic impact of 5G wireless technology will be huge. Technology is known for its effects on various domains of the world, and the largest impact falls on the economy. Like all other technologies, 5G will have an impact on the economy that cannot be ignored.

This impact is much greater than previous network generations. The development requirements of the new 5G network are also expanding beyond the traditional mobile networking players to industries such as the automotive industry.

In a landmark study on the 5G economy conducted by Qualcomm, it emerged that the full economic effect of 5G wireless technology would appear around 2035 in a broad range of industries, producing up to $12.3 trillion worth of goods and services directly enabled by 5G. It also emerged that 5G wireless technology could potentially generate up to $3.5 trillion in revenue in 2035 and directly support up to 22 million jobs. Also surprising is the fact that, over time, the total contribution of 5G to global GDP growth could be as much as the contribution of India.

With every new generation of data network, there are always advantages and disadvantages to discuss. How the network capitalizes on its advantages while shielding its users from its deficiencies will ultimately determine its success.

Internet in space: Bringing Wi-Fi to Earth orbit, the Moon, and Mars

If space has always been an enigma for mankind, then the Moon has always served as the first post for any attempt at understanding or exploring deeper space. All ventures into outer space, ranging from exploratory fly-bys to crewed flights, have first been tried out on the Moon. Space research is also moving into larger-scale simulations using powerful computers; in fact, given the high cost, and often the impracticability, of conducting live experiments, space research had moved into computer-based simulation long before most other fields.

Everything from the flight paths of future rockets to theories on the origin of the universe and its evolution is today computer simulated. Consider the case of the magnetic field around a planet, taking the Moon as our example: the Moon's magnetic field is very feeble compared to that of the Earth. Also, unlike on the Earth, it varies widely from point to point. This much is known from the measurements taken by spacecraft that flew by or landed on the Moon.

Internet in Mars

The Internet is slated to go over and above this world, the first target being Mars, followed by Jupiter and then the Moon. This idea of taking the Internet into space comes from the need for a low-cost, high-reliability interplanetary network. It is not that there was no communication earlier: when countries started sending probes into space, each used a unique set of protocols to communicate with the Earth. This was done using the Deep Space Network (DSN) developed by NASA. Since these probes communicated with the same ground stations, the need for a common protocol increased with time. Taking the Internet to space is the offshoot of this need for standardization. The Interplanetary Network (IPN) group, part of the Jet Propulsion Laboratory (JPL), is managing this program.

But how will this be implemented?

One can plan how the Internet will work on the Earth because of its fixed size and the fixed positions along which the data has to travel. For the interplanetary implementation, the planets will be connected through individual dedicated gateways. The individual networks can follow their own protocols, but these protocols will terminate at the gateway. By keeping the internets of all the planets separate, engineers will not have to make long service calls. Besides, they will not have to send a database of 20 million dotcom names to Mars periodically.

These gateways will work on a bundle-based protocol, which will reside above the transport layer to carry data from one gateway to another. A gateway may not be on the surface of a planetary body; it can be a spacecraft in orbit too. A bundle protocol is needed because the data will have to travel huge distances, and sending small packets of data may not be feasible. Instead, the data will be collected and sent in a bundle, as a big burst of data, to the next gateway.
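To illustrate the store-and-forward idea behind a bundle protocol (this is a conceptual sketch, not the actual DTN/bundle-protocol stack), the Python snippet below collects data at a gateway and forwards it as a single bundle when a contact window with the next gateway opens; all names and numbers are invented.

# Conceptual sketch of store-and-forward bundling between gateways.
class Gateway:
    def __init__(self, name):
        self.name = name
        self.buffer = []          # data waiting for the next contact window

    def receive(self, packet):
        # Data from the local network is collected rather than forwarded immediately.
        self.buffer.append(packet)

    def contact(self, next_gateway):
        # When a contact window opens, send everything as one bundle.
        bundle = {"source": self.name, "payload": list(self.buffer)}
        self.buffer.clear()
        next_gateway.on_bundle(bundle)

    def on_bundle(self, bundle):
        print(f"{self.name} received a bundle of {len(bundle['payload'])} packets "
              f"from {bundle['source']}")

mars_gateway = Gateway("mars-orbiter")
earth_gateway = Gateway("earth-dsn")
for i in range(5):
    mars_gateway.receive(f"telemetry-{i}")
mars_gateway.contact(earth_gateway)   # one big burst instead of many small packets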

Future Communication Solutions for Space

The new space renaissance witnessed around the globe in recent years has spurred innovations in telecommunications in space. Both private entities and government agencies are exploring new ways to communicate with space assets:

ATLAS’S FIRST COMMERCIAL DEEP SPACE NETWORK & LASER COMMUNICATION

Private company ATLAS operates a satellite ground network for communication services in the UHF, S, X and Ka-band frequencies. ATLAS has developed a proprietary cloud platform that enables low latency machine-to-machine (M2M) communications using the REST web standard. The company is currently working with NASA to develop a portable ground station network using an internet-managed antenna system developed by ATLAS.

ATLAS is also developing a deep space communication relay network. The Interplanetary Satellite Communications Network (ISCN) will likely become the first commercially-available deep space network, though the company hasn’t held a technology demonstration for the network yet.

Solstar’s Wi-Fi in Space

Solstar is a private startup that’s working to develop in-flight wireless connectivity for future suborbital and orbital flights. The company has developed the Schmitt Space Communicator, a small, sturdy router designed to withstand extreme conditions of spaceflight in order to provide wireless internet connectivity aboard a rocket.

The Future of Internet in Space

The future is bright for the networking technology supporting space exploration. The LCRD mission, for example, is slated to launch in 2019 and will further test laser communications between ground stations and spacecraft. Astrobotic's first mission to the Moon is expected to see laser communication data rates of 1 Gbps to the Earth using ATLAS's laser technology. NASA and MIT researchers have developed laser-based long-distance internet: the team made history by transmitting data over the 384,633 kilometers between the Moon and the Earth at a download rate of 622 megabits per second, which is quite remarkable; for all the people struggling to get a good internet speed, this would be helpful.

Moisture Sensor

Our ancestors gave great importance to agriculture, basically following all the traditional methods their forefathers followed. As the population increased, the need for production grew and the pressure on agriculture increased over the years. The Green Revolution in the 1960s led to the use of various methods, including good-quality seeds, better irrigation techniques, fertilizers, and so on. Even then, our country's population never stopped increasing, and this is where technological advancements started appearing in agriculture; what we see in many parts of the world today is called modern agriculture. It incorporates scientific data and technology to increase the yield of the crop. This is considered a milestone in agriculture; examples include monitoring irrigation via smartphones, ultrasound for livestock, crop sensors, robots, and drones, and most of these technologies fall under 'precision agriculture'. Some of the advantages of using agricultural technology are higher crop yields and savings in inputs like fertilizers, water, and pesticides. It also reduces the impact on natural ecosystems and the physical work required of farmers.

Now let's look into a specific technology called soil water sensors. From the name itself we can understand that it is related to water; since it is an agricultural technology, we can say that it is used to detect the amount of water in soil. This technology has been used by farmers for many years, and its usage has been increasing, as saving water is the need of the hour. The sensor gauges the volumetric water content of the soil. Water content can be measured directly by sampling, drying, and weighing the soil, but these sensors estimate the volumetric content indirectly from other properties such as the dielectric constant or electrical resistance. The sensor is compatible with, and can easily be connected to, the Arduino UNO, Arduino Mega 2560, Arduino ADK, and many other kinds of microcontrollers.

Soil Moisture Sensor Circuit – Analog Mode

The sensor has two probes that pass current through the soil and measure the resistance. If there is more water in the soil, it conducts more electricity and has less resistance; the opposite holds if there is less moisture in the soil. The sensor can be operated in two modes: digital and analog. The module contains a potentiometer that sets a threshold value, and this value is compared against the sensor output by an LM393 comparator; an output LED lights up based on the threshold. The probe has three pins: S means signal, – means GND (ground), and + means the 5 V supply. The sensor comes with a small PCB carrying the LM393 comparator chip, a potentiometer, output signal pins (both analog and digital), and input power pins.

In analog mode, the analog output of the sensor is used, and the output value will mostly range from 0 to 1023. Moisture is measured as a percentage from 0 to 100, and these values can be observed in the serial monitor. Different ranges can also be set for the moisture values, and the water pump can be turned on and off accordingly.

In digital mode, the digital output of the sensor is connected to a digital pin of the microcontroller. The sensor module has a potentiometer that sets the threshold value. This value is then compared with the output value of the sensor using the LM393 comparator present on the sensor module; the comparator plays the main role by comparing the threshold value and the sensor output value. If the sensor value is more than the threshold value, the digital pin will show 5 V and the LED on the sensor will light up, which means the moisture content in the soil is high; if the sensor value is less than the threshold value, the digital pin will show 0 V and the light will go off.
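As a rough illustration of both modes in plain Python, the sketch below maps a raw 0-1023 reading to a moisture percentage (analog mode) and applies a threshold (the digital-style decision); read_adc() and the wet/dry calibration values are placeholders for whatever your board and ADC library actually provide.

# Illustrative sketch only: convert a raw 0-1023 soil-moisture reading to a
# percentage and apply a threshold. read_adc() is a stand-in for a real ADC read.

RAW_DRY = 1023           # assumed raw reading in completely dry soil (calibrate this)
RAW_WET = 300            # assumed raw reading in saturated soil (calibrate this)
THRESHOLD_PERCENT = 40   # example value: below this, switch the pump on

def read_adc():
    # Placeholder returning a fake raw reading; replace with a real ADC read.
    return 650

def moisture_percent(raw):
    # Map the raw value onto 0-100 %, clamping to the calibrated range.
    raw = max(min(raw, RAW_DRY), RAW_WET)
    return round(100 * (RAW_DRY - raw) / (RAW_DRY - RAW_WET))

raw = read_adc()
percent = moisture_percent(raw)
pump_on = percent < THRESHOLD_PERCENT
print(f"raw={raw} moisture={percent}% pump_on={pump_on}")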

Some of the advantages of the sensor are as follows. Irrigation is a key factor in agriculture, and many farmers are not sure how much water is required and how much is available in the soil, which leads them either to over-irrigate or to under-irrigate, the latter of which can kill the plant. This kind of sensor makes the automation of farming easier. It consumes little power, works at 5 V on a current of less than 20 mA, and weighs only 3 grams.
It also has a few disadvantages: it covers only a small sensing area, the depth of detection is only 37 mm, and it cannot be used in many parts of India because it works only in the range of 10 °C to 30 °C. In many cases it was found to be less accurate.

Common problems faced by farmers include sensor failure, lack of timely data, excessive labour requirements, wiring issues, and failure of data telemetry.

Data Science: The Future

What is data science? 

Data science, in simple words, is the study of data. Mainly, it deals with the developing methods of recording, storing, and analyzing data to extract useful information effectively. The vision or long-term goal of data science is to gain insights and knowledge from any type of data — both structured and unstructured. 

In data science, one deals with both structured and unstructured data. The algorithms also involve predictive analytics. Thus, data science is about both the present and the future: finding trends based on historical data that can inform immediate decisions, and finding patterns that can be modelled and used to predict what things may look like in the future.

Why choose data science? 

Data science has turned out to be a necessity for companies due to the amount of data generated and the evolution of the field of analytics. To make the most of their data, companies from all domains, be it finance, marketing, retail, IT, or banking, are looking for data scientists, and this growth has led to a massive demand for data scientists all over the globe. With the kind of salaries on offer, and with IBM declaring it a trending job of the 21st century, it is a lucrative field. It is also a field in which anyone from any background can make a career as a data scientist.

What is seen in Data Science? 

Machine Learning: Machine learning is a way to learn from and make sense of data; it involves algorithms and mathematical models, chiefly employed to make machines learn and prepare them to adapt to everyday advancements. These models can also help to understand behaviour and to predict the future.

Big Data: Humans are producing enormous amounts of data in the form of clicks, orders, videos, images, comments, articles, and RSS feeds. This data is generally unstructured and is often called Big Data. Big Data tools and techniques mainly help in converting this unstructured data into a structured form.

Skill sets required: 

Python coding: Python is the most commonly preferred language for implementing mathematical models and concepts, as it has libraries/packages to build and deploy models. The R programming language can also be used as an alternative.

MS Excel: 

Microsoft Excel is considered an essential requirement for all data entry jobs. It is of great use in data analysis: applying formulae and equations and producing charts from a messy lot of data.

Hadoop Platform: 

It is an open-source distributed processing framework. It is used for managing the processing and storage of big data applications. 

SQL database/coding: 

It is mainly used for the preparation and extraction of datasets. It can also be used to solve problems like Graph and Network Analysis, Search behavior, fraud detection. 

Technology: 

Since there is so much unstructured data out there, one should know how to access it. It can be done in a variety of ways, via APIs, or web servers. 

Techniques 

• Mathematical expertise: Data scientists also work with machine learning algorithms such as regression, clustering, and time series analysis, which require a strong grounding in mathematics since they are based on mathematical algorithms.

• Working with unstructured data: Most of the data produced every day, in the form of images, comments, tweets, and search history, is disorganized. It is a handy skill in today's market to know how to convert this unstructured data into a structured form and then work with it.

Career Opportunity/option : 

In a world where 2.5 quintillion bytes of data are generated every day, a professional who can organize this humongous data to provide business solutions is indeed the hero! Much has been spoken about why Big Data is here to stay. Building on what has already been written and said, let's discuss data science career opportunities and why 'Data Scientist' has become one of the most sought-after job titles of the 21st century.

According to the Harvard Business Review, a data scientist is "a high-ranking professional with the training and curiosity to make discoveries in the world of AI and Big Data." Therefore, it is no surprise that data scientists are coveted professionals in the Big Data analytics and IT industry. With experts predicting that 40 ZB of data will be in existence by the year 2020, data science career opportunities will only shoot through the roof. There is a shortage of skilled professionals in a world that is increasingly turning to data for decision making, and this has led to enormous demand for data scientists in start-ups and well-established companies. A McKinsey Global Institute study states that "by 2018, the US alone should encounter a shortage of about 190,000 professionals with great analytical skills. With the Big Data wave showing no signs of slowing down, there's a rush among global companies to hire Data Scientists to manage their business-critical Big Data".

So, we can conclude that there is broad scope in data science and machine learning; we can regard it as our future.

The Plastic Solution

The notorious degradation-proof plastic can now be broken down!
It was in 2016 that a group of Japanese researchers discovered a bacterial strain that had naturally evolved to degrade polyethylene terephthalate (the common plastic known as PET or polyester).
Later, while studying the same bacteria, scientists from the University of Portsmouth in the U.K. and the U.S. Department of Energy's National Renewable Energy Laboratory made a breakthrough when they found that a particular enzyme in the bacteria was capable of breaking down plastic completely in a few days. (The researchers' findings were published in Proceedings of the National Academy of Sciences.)

While investigating the enzyme's structure, the researchers made a tweak to it that ramped up its ability to degrade PET, also giving it the ability to degrade an alternative form of PET known as PEF.

Imagine: the group of researchers led by Professor John McGeehan only wanted to tweak the enzyme to find out its origin, but ended up finding a substance that could efficiently break down plastic!
National Geographic reported in 2015 that eight million tons of plastic make it into the ocean annually. Plastic is detrimental to wildlife, including seabirds and other marine creatures that mistake it for food.

So, removing plastics from the environment would be a great thing, but the question is whether the enzyme can help.

Simply breaking down large pieces of plastic into smaller pieces is not useful in itself; rather, it creates microplastics that can damage marine environments. The enzyme, however, can make plastic recycling far more effective. Even then, there is a long way to go before it can be used in recycling industries.
Although a small step for now, it proves that there is more room for development in the field, and the future shows a promising solution for the garbage piling up around the world.

The author believes that everyone, not just scientists, should play their part in creating a better future for the next generation. Of utmost priority should be the objective that plastic does not make it into water bodies.

Using less of it in our daily lives, or simply saying NO to it, would be a giant step towards cleaning up the world.

NEW ERA OF OPTO-TECHNOLOGY: OPTICAL COMPUTERS

The world is becoming more advanced day by day, and because of that we want things that make our work easier, cheaper, and less time-consuming.
Technology plays a vital role in achieving this.
Opto-technology is one such technology, and it plays a crucial role in building an advanced, high-tech world.
Now the question arises: what is opto-technology?
If we look at the meaning of the prefix "opto", it means light, or photons.
Yes, photons are small packets of light; that much everybody knows, but I have written this article so that we can learn what we do not know about them and their special properties.
So, let us discuss their hidden facts and properties.
Light has its own energy and travels in the form of small packets; these packets transfer their energy to one another, and in this way a chain of small packets is created, which we call a beam of light, and which travels at the well-known speed of light.
These photons carry energy that can be converted into other forms quite easily and with little loss, which is why opto-technology is in demand in today's generation.
Optical light technology is used in many fields, such as medical science, the IT sector, automation, artificial intelligence, and IoT.
In this article we will mainly discuss optical computers, so let's get into them.
As the name suggests, an optical computer is a device that uses photons rather than electric current to perform its digital operations.
In today's world, digitalization is being implemented everywhere, but it can be implemented in a smarter way.
A smarter way means less input with more output.
We use electricity for everything that becomes digitalized, which is costly and not eco-friendly.
If we look at our planet, everything on it has its own alternatives, and by finding them we can solve big, advanced problems more easily.
For example, an alternative way of producing electricity can be photons or infrared radiation.

WORKING:
An optical computer can be constructed using infrared diodes that absorb photons. We also need semiconductor techniques to build an infrared CPU and its attributes. Three things are needed to build it and make it work:
1. An optical processor
2. Optical data transfer
3. Optical storage
These can be achieved by building photonic chips, which store data securely while occupying very little space and thus make our work easier.
Optical computers are much faster than normal computers, as they use photons, which travel faster than electric current, so we can do our work more effectively and easily.
Optical computers can also be smaller in size, as all the processors and internal devices are set up inside semiconductor chips that use photons as their energy source.
The light can be delayed by passing it through an optical fibre of a certain length, and it can also be split into multiple (sub)rays. These rays then pass through a diode chamber that stores the absorbed photons; the photons are then converted into more energetic photons with the help of small magnetic motors, and when these energetic photons travel at high speed they produce the energy used by the optical computer to perform its digital and logical operations.

This can also be achieved through the process described below:

When an atom of the laser crystal is irradiated by light from the pump source, it absorbs a photon and is excited. When a photon passes by an excited atom, the atom emits a photon which has the same attributes (direction, frequency, and phase) as the passing photon. The atom returns to the ground state, and the new photon has a different polarization energy, which is further used to perform digital operations.

Optical technology is somewhat similar to laser technology, as both use beams of photons to do their work.
So, as discussed above, lasers can also be an alternative route to optical technology.

Since photons have no electrical charge, they are immune to electromagnetic interference. Thus, the combination of global interconnection between photons and chips, a high level of parallelism in processing, and the potentially very high speed of photons could lead to very great computational power.

Thus, in conclusion, we can say that technology and humans are like the two faces of a coin.
Just as the probability of a coin landing on either face is equal, the probability of humans finding new technology is equally shared. Thus, we should always try to find new technologies in this new era so that our future generations can live a happier and more prosperous life.

THE NEW 1000 TIMES FASTER MEMORY

A new type of computer chip could allow computers to carry out operations that currently take hours or days in a matter of minutes. "Imagine if you are a taxi driver but the town where you work is always changing, people are constantly swapping houses and the shops and services are forever disappearing and reappearing in different places. That's similar to the way in which data is organized in existing chips," said Peter Marosan, chief technology officer at Blueshift Memory.

“Our design is the equivalent of replacing that with a stable structured town where you already know where everything is and can find it much more quickly. It makes everything faster, easier and more effective.”
The design allows for drastically more efficient modelling of data, meaning it would be useful for weather forecasting and predicting climate change.
What does data Transfer mean?

Also known as data transmission, data transfer is the process of using computing techniques and technologies to move electronic or analog data from one computer node to another. Different standard communication media and formats are used to achieve this. Data is transferred in the form of bits and bytes over a digital or analog medium, and the process enables digital or analog communication between devices. Transferred data may be of any type, size, and nature. Analog data transfer typically sends data in the form of analog signals, while digital data transfer converts data into digital bit streams.

Traditional data transfer speed

Data rates are often measured in megabits (million bits) or megabytes (million bytes) per second. These are usually abbreviated as Mbps and MBps, respectively. Another term for data transfer rate is throughput. Because there are eight bits in a byte, a sustained data transfer rate of 80 Mbps is only transferring 10 MB per second. A hard drive may have a maximum data transfer rate of 480 Mbps, while a local ISP may offer an internet connection with a maximum data transfer rate of only 1.5 Mbps.
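The bit/byte arithmetic above is easy to sanity-check with a few lines of Python:

# Convert a data rate in megabits per second (Mbps) to megabytes per second (MBps).
def mbps_to_mbytes_per_s(mbps):
    return mbps / 8          # 8 bits per byte

print(mbps_to_mbytes_per_s(80))    # 10.0 MB/s, as stated above
print(mbps_to_mbytes_per_s(480))   # 60.0 MB/s for the 480 Mbps example
print(mbps_to_mbytes_per_s(1.5))   # 0.1875 MB/s for the 1.5 Mbps connection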

What is memory wall?

This theory was initially conceptualized by Wulf and McKee in 1994 and revolves around the idea that central processing units are advancing at a fast enough pace to leave memory (RAM) stagnant by comparison. This is not going to happen immediately, but if current trends in CPU and RAM development remain the same, a memory wall could be hit sometime in the near future.

A survey found that CPU speed increased at an average rate of 55% per year from 1986 to 2000, whereas RAM speed increased by just 10% per year. A similar study, however, showed just a maximum 12.5% annual increase in CPU performance from 2000 to 2014.

According to Moore's Law, which states that the number of transistors in a circuit doubles every two years, CPUs will eventually become too fast to yield any noticeable difference in computing speed. Once we reach this so-called memory wall, program or application execution time will depend almost entirely on the speed at which RAM can send data to the CPU. So even if one has an incredibly fast processor, its performance may be limited by the RAM speed.

Present-day processor chip makers have also noted that certain applications become less efficient as processors continue to evolve; this is known as the von Neumann bottleneck effect. Delays in signal transmission continue to grow even as feature sizes shrink, further aggravating the bottleneck.

But there are some solutions available to mitigate the memory wall problem. For example, by providing small amounts of high-speed memory in multiple levels of caching (to store data locally in order to speed up subsequent retrievals), computers can bridge the gap between RAM and CPU speed.
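As a loose software analogy for how caching speeds up repeated retrievals (CPU caches do this in hardware, which this sketch does not model), Python's functools.lru_cache keeps recent results in fast local storage so a slow fetch only happens once:

# Software analogy only: a small in-memory cache avoids repeating a slow lookup.
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def slow_lookup(key):
    time.sleep(0.5)              # pretend this is a slow fetch from "main memory"
    return key * 2

start = time.perf_counter()
slow_lookup(21)                  # first call: slow, result gets cached
print("cold:", time.perf_counter() - start)

start = time.perf_counter()
slow_lookup(21)                  # second call: served from the cache, almost instant
print("warm:", time.perf_counter() - start)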

Von Neumann bottleneck effect

The von Neumann bottleneck is about how to serve a faster CPU with correspondingly fast memory access. Part of its basis is the von Neumann architecture (in which this latency is unavoidable), where a computer stores programming instructions along with the actual data, versus a Harvard architecture, where these two kinds of memory are stored separately. These setups became necessary as simpler, pre-programmed machines gave way to newer computers requiring better ways to control programs and data.

Memory improvements have mostly been in density, the ability to store more data in less space, rather than in transfer rates. As speeds have increased, the processor has spent an increasing amount of time idle, waiting for data to be fetched from memory. No matter how fast a given processor can work, in effect it is limited to the rate of transfer allowed by the bottleneck.
Often, a faster processor just means that it will spend more time idle, which can be a major drawback for efficiency.
The von Neumann bottleneck has often been considered a problem that can only be overcome through significant changes to computer or processor architectures.

A new technology

A Cambridge-based startup has recently created a prototype for a memory chip that it believes could make data transfers in computers 1,000 times faster.
Blueshift Memory says its prototype has a design that "reorganises" the way in which memory chips handle large-scale operations where data transfers need to be rapid. The firm said the chip is structured to store data in readiness for these types of operation. Although still a prototype, Blueshift claims its model has already yielded impressive results. The company's technical experts have built an FPGA card that emulates the chip's effects, and it is estimated that simulations using this chip could make searches on Google as much as 1,000 times faster. Blueshift said its design could also make it much easier to program some data operations, because it would remove the need to include complex instructions about how to handle the vast quantities of data involved. Since the amount of data being generated each year is growing exponentially, the developers of the new form of memory believe their technology will cope better with the increasing demands.
How they plan to work
The specific problem the chip aims to solve is the von Neumann bottleneck, which is sometimes referred to as a 'memory wall' or data tailback. The start-up designed the memory to address the gap between rapidly developing central processing units and the slower progress in computer memory chips.
The disparity creates a "tailback" when high-performance computers perform large-scale operations, like database searches with millions of possible outcomes. The troves of data effectively get stuck in a slow-moving queue between the CPU and the less efficient memory, which reduces the speed at which computers can deliver results.
CPU performance doubles roughly every 1.5 years, while memory performance doubles only every 10 years, suggesting a performance gap between the two that grows by about 50% every year.
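That roughly 50% figure can be sanity-checked in a couple of lines of Python using the stated doubling periods (1.5 years for CPUs, 10 years for memory):

# Check the claimed ~50% yearly growth of the CPU-memory performance gap.
cpu_growth_per_year = 2 ** (1 / 1.5)   # doubling every 1.5 years ~ 1.59x per year
mem_growth_per_year = 2 ** (1 / 10)    # doubling every 10 years  ~ 1.07x per year

gap_growth = cpu_growth_per_year / mem_growth_per_year
print(f"gap grows ~{(gap_growth - 1) * 100:.0f}% per year")   # prints roughly 48%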

“It would make some big data programming as straightforward as the basic data searches that computing students learn to write in high school,” Peter Marosan added.
Computationally expensive operations such as drug discovery, DNA research, artificial intelligence design and the management of future smart cities could be made much faster on the new memory.

This idea will allow certain complex operations to take place in a matter of minutes compared to tasks that take hours today. However, it is not expected to have an impact on simpler operations such as word processing.

The chip’s designers stress that this is only part of a solution that will require greater collaboration between various companies who are working on the “data tailback” challenge. They have built a working model to emulate the chip’s effects, ahead of the more expensive task of creating the first chip.

In testing it was demonstrated that the algorithms used in weather forecasting and climate change modelling could run 100 times faster using the chip. It could also improve the speed of search engines and the processing speeds of virtual reality headsets by as much as 1,000 times.

Blueshift is now seeking funding to create a full first iteration of the chip. The company said that changing the way in which computer memory works could also help the artificial intelligence in autonomous vehicles like driverless cars, which need to process huge quantities of data quickly to make decisions.
They added that fast, real-time data processing on a large scale will be essential in a future in which objects and people are likely to be closely connected in smart cities, with technology used to manage traffic flows, utility supplies, and even evacuation procedures in times of danger.