Will Mobile Wallets Survive the Test of Time?

A couple of decades ago, the idea of sending or receiving money via a mobile device seemed bizarre. Today, using a mobile device for money transfers is not only widely accepted but is fast becoming the norm in our society.
The hassle of carrying cash and cards, and the stress of losing them, seems to be a thing of the past. Mobile wallets have become the modern face of online payment.
A mobile wallet, often referred to as an m-wallet, essentially enables an individual to make monetary transactions using a mobile device. A mobile wallet is typically delivered through one of several payment processing models: direct mobile billing, SMS-based transactions, mobile web payments, and near-field communication (NFC). Regardless of the model used, a mobile wallet service is generally delivered by, and in collaboration with, mobile service providers and banks.

Evolution
Back in 1997, Coca-Cola introduced a few vending machines in Helsinki that allowed consumers to buy a drink by text message. Although small, this innovative use case is still regarded as the first example of a mobile payment and introduced the idea of using mobile devices for transactions. As time went on, mobile devices were used to buy movie tickets, arrange travel, and even order pizza. By 2003, about 95 million cell phone users had made a purchase using their mobile device!
The major players in mobile wallets are Google, Apple and Samsung. Google Wallet launched in 2011, making Google the first major company to bring a mobile wallet to market. With the wallet, shoppers could make payments, earn loyalty points and redeem coupons using a technology known as near-field communication (NFC). Unfortunately, Google Wallet had its limitations: it worked on only one phone model and was accepted by only a handful of merchants. Nonetheless, it was nothing short of revolutionary and paved the way for other m-wallets.
In 2012, Apple introduced Passbook, an app aimed at boarding passes, tickets, and coupons rather than actual mobile payments. Two years later, during the launch of the iPhone 6, Apple Pay was announced. Although Apple Pay was initially available only in the US, it has since expanded to markets such as the UK and China.
Some of the biggest advantages of m-wallets include:
• Mobile wallet ensures smooth transfer of payments from one party to another
• It provides a database marketing opportunity to marketers
• Mobile wallet ensures cost savings for the business by ensuring transparency in payments
• Location-based services help businesses run customized promotions for their customers
• Running loyalty programs with customers becomes easier with the help of mobile wallets
However, despite these numerous advantages, many mobile wallets seem to fail. Is it the public’s general trust issue, or do people fear getting their ‘pockets picked’?
Looking at the Indian scenario in particular, we have all seen a boom in the usage of mobile wallets in recent times. There is no need to carry cash around; just scan and you’re done. In simple words, they were created to ease our lives, and they did so successfully. There were plenty of reasons behind this, digitalization being the topmost. The early mobile wallet apps prospered, at least initially, during these times, but does that mean the new ones are destined to fail? Let’s find out.
Firstly, we need to understand the difference between a mobile payment app and a mobile wallet app. A mobile payment app offers online payment services within itself and has the single task of paying from app to app, whereas a mobile wallet is “all of your wallet stored inside an app, be it cash, cards or even documents”. So why do people prefer a wallet app over a direct payment app? Here’s why:
1. More offers and shopping portal accessibility.
2. One can shop at various stores, both online and offline, while reducing the number of places where card details are stored.
3. The wallet does not disclose or directly display the card number and other sensitive details, greatly reducing the chances of online fraud and misuse of those details.
4. You can redeem cashbacks and various other rewards when you pay through a mobile wallet app on a partner portal.
5. Also, most e-wallets have their own reward schemes, offering extra discounts and added benefits.
Now, addressing the elephant in the room: if mobile wallets have so much to offer, why are so many startups in this field failing? Why are most new m-wallets unable to keep up with the established ones? The top reasons include:
1. Risk of investment
Merchants will not invest their money in terminal hardware unless there is enough return from the customer side. The same goes for the start-ups that fail in the m-wallet space: they fail to understand the customer and merchant base, and thus fail to create or facilitate transactions between them. Failing to understand the psychology of the target audience is the biggest reason behind the failure of such start-ups.

2. Competition from existing payment methods
NFC (near-field communication) was promoted as a globally accepted method of payment, yet it hasn’t been very successful. The primary reasons were reluctance and lack of acceptance among merchants, and the requirement for additional hardware that most customers did not have and were unwilling to buy. Few were ready to go out of their way to adopt a newer, though arguably easier, method of payment.

3. Availability of options
Other reasons include the choices available to customers: they have a variety of offers and discount schemes to choose from in the market. This holds true for every industry. The competition and the stakes are already high, so these m-wallet companies have to keep up.

4. End user dependency
How is it supposed to work if everything depends on the end users, their behaviour, and their preferred payment methods? Do they really want to shop on these portals rather than with the well-established and trusted names already in the industry? An honest observation about humans is that, more often than not, we hate adapting to an entirely new behaviour. It’s hard for us to change.

5. Security and breaching issues
Another major reason is the security threat and the lack of trust people have in new apps that spring up in the market. Most people, even the young, are not comfortable sharing their card details with a new m-wallet. These apps therefore have to work on better encryption and safety methods, such as 6-digit PIN locks and fingerprint unlocking, among others. There have been multiple incidents of bank fraud, money being deducted from customers’ wallets, and funds getting stuck in various processes. MobileAppDaily rightly said, “While digitalization has been a blessing for most of us, it has also made our data vulnerable to potential data breaches.”

The Takeaway
If one wishes to excel and stay in this field, it is absolutely necessary to make the security of your customers’ data the top priority. The various needs and wants of the target audience must be identified and their worries addressed. If the recent past is anything to go by, does this mean all m-wallets are destined to fail? Absolutely not! It only means there is still a lot left to experiment with, both for the companies and for the crowd they are targeting, while keeping a check on the shortcomings at the same time.

Making Life Multiplanetary

“You want to wake up in the morning and think the future is going to be great, and that’s what being a spacefaring civilization is all about. It’s about believing in the future and thinking that the future will be better than the past. And I can’t think of anything more exciting than going out there and being among the stars.”
– Elon Musk, SpaceX

Elon Musk and SpaceX have proposed the development of a Mars transportation infrastructure in an effort to enable the eventual colonization of Mars. The plan includes fully reusable launch vehicles, human-rated spacecraft, on-orbit propellant tankers, rapid-turnaround launch and landing mounts, and local production of rocket propellant on Mars through in-situ resource utilization (ISRU). SpaceX’s optimistic goal is to land the first humans on Mars by 2024.

The key element of the architecture is the SpaceX Super Heavy, a two-stage rocket whose upper stage (“Starship”) also serves as the ship that travels to Mars and returns to Earth. To carry a substantial payload, the spacecraft first enters Earth orbit, where it is refuelled before departing for Mars. After landing on Mars, the ship is loaded with locally produced propellant for the return to Earth. The expected payload is for the Starship second stage to inject 150 tonnes towards Mars.

SpaceX plans to focus its resources on the transportation part of the Mars colonization project, including the design of a propellant plant based on the Sabatier process to be deployed on Mars, which would synthesize methane and liquid oxygen as rocket propellants from the local supply of atmospheric carbon dioxide and accessible ground water ice (the Sabatier reaction combines carbon dioxide with hydrogen obtained by electrolysing that water ice). That said, Musk advocates a much larger set of long-term Mars settlement goals, going far beyond what SpaceX plans to build; a successful colonization would ultimately involve many more economic actors, whether individuals, companies, or governments, to grow the human presence on Mars over several decades.

It is an ambitious plan put forward by SpaceX.

Elon’s key milestone: 2025

That is the earliest time at which Musk believes a Mars settlement could take shape. The CEO has projected a timeframe of “7 to 10 years” before the first major bases come to fruition.

These bases would expand on the work left behind by the first crews. Paul Wooster, principal Mars development engineer for SpaceX, explained that “the idea is to expand out: start off with an outpost, but grow into a larger base, like there are in Antarctica, but eventually a village, a town, growing into a city and then multiple cities on Mars.” The larger settlements would provide habitats, greenhouses, and life support, and would enable new experiments that help answer some of the fundamental questions about life on Mars.

Objective:

SpaceX’s Mars mission objective is to send a first cargo mission to Mars in 2022. The goals for this first mission are to confirm water resources, identify hazards, and put in place initial power, mining, and life-support infrastructure. A second mission, with both cargo and crew flights, would follow. The ships from these initial missions will also serve as the beginning of the first Mars base, from which we can build a thriving city and, eventually, a self-sustaining civilization on Mars.

Leading Boldly: Adopting Bleeding Edge Technology & Embracing Platforms for Growth

A fundamental dilemma manufacturers face while developing technology is how to deliver a fully functional, bug-free product to the market and, ultimately, to consumers. When the element of the consumer is thrown into the picture, we get a basic idea of what bleeding edge means and why one might prefer it over cutting edge. Cutting edge technology opened the gates to solar fuel, deep data mining and even the way we stream movies, whereas bleeding edge technology carries a note of caution: it is a category of technologies so new that they may be unreliable and may lead adopters to incur greater expense in order to make use of them. The term bleeding edge was formed as an allusion to the similar terms “leading edge” and “cutting edge”. It tends to imply even greater advancement, though at an increased risk because of the unreliability of the hardware or software. The first documented use of the term dates to early 1983, when an unnamed banking executive was quoted using it in reference to Storage Technology Corporation. By its nature, a proportion of bleeding edge technology eventually makes it into the mainstream.
Ten years ago, hardly anyone imagined driverless cars, but Google proved the sceptics wrong by beginning its self-driving car project in 2009. Today, Waymo is a successful name, with a reported one billion miles driven in 2016 alone. Tried and tested on crowded roads, this disruptive innovation is definitely a success in the field of technology. But some technologies, with their allusive names, create a new sensation in the tech market, and bleeding edge is one such label. For this very reason, some companies market their products as cutting edge as opposed to bleeding edge, to convey a sense of reliability and rigorous testing. For example, when Sundar Pichai joined Google, Gmail was considered bleeding edge technology.
Bleeding edge is newer and more extreme than technologies on the leading edge or cutting edge. These new and untested offerings come with uncertainty and, in some cases, unreliability. As a result, a consumer might be “cut” by using such a new product if it fails to gain market acceptance.
The U.S. military uses bleeding-edge semiconductor technologies in its newest warplanes, battleships and missiles, as well as in ones under development. These technologies tend to be very expensive to build, especially at first. However, in keeping with Moore’s law, the chips are typically more powerful and advanced than those used in consumer applications. Over time, the military works out the bugs. Eventually, these technologies tend to find their way into consumer applications.
A technology may be considered bleeding edge when it carries a degree of risk or, more generally, when there is a significant downside to early adoption, such as:
• Lack of consensus – competing ways of doing new things exist and there is little to no indication in which direction the market will go. By its very nature, consumers and firms will be unfamiliar with the product and its relationship to existing technologies, leading to rapid changes in what is considered best practice as more becomes known about the technology’s qualities.
• Lack of testing – The technology may be unreliable, or simply untested. Bleeding edge technology is usually released to the public before any major testing is done. In fact, the technology is presented to consumers as beta testing is underway. This usually helps companies smooth out any kinks, problems and any other issues that go unseen when the technology is originally made. Unfortunately, this means that the end user, or the consumer, is usually the one who ends up with the greater risk. It also means there could be added expenses for the consumer as well, whether that’s time or money.
• Industry resistance to change – trade journals and industry leaders have spoken against a new technology or product but some organizations are trying to implement it anyway because they are convinced it is technically superior.
Since the market demands innovation and advancement, these high-risk technologies must be considered and applied to new projects for the growth of the tech market, and this can be driven by Chief Information Officers (CIOs), who are responsible for the information technology and computer systems that support enterprise goals. For CIOs around the globe, staying ahead of the accelerating technology curve remains a top priority. Between the $60 billion per year going into cloud infrastructure, the $20 billion being invested annually in business analytics, and the $50 billion+ dedicated to CRM and ERP systems, it is clear that CIOs are investing heavily for the digital age.
But building the capacity for innovation involves more than investing in cloud and BI: it involves applying the right combination of people, process, and technology to achieve sustainable business objectives. CIOs must foster a culture that allows for rapid execution of new business models, uses technology to augment their talent and unlock trapped value, and encourages creativity. And, most importantly, they must be able to execute. So how can CIOs cut through the noise of emerging technologies while obtaining the buy-in they need to create a culture of innovation?
Let’s use AI as a sample technology to demonstrate how CIOs can make an impact. First, they must know exactly what they hope AI will provide, whether it’s predicting failures in machinery, identifying cancer early in patients, automating redundant back-end IT work, or guiding autonomous vehicles. Next, they must ensure the talent, data, and required resources are in place to achieve their goals. For providing preventative maintenance on machines, teams must know which data set(s) they need to acquire, build the models needed to accurately predict which machines will fail, automatically send crews to fix the machines, and ensure all of this is executed before a customer raises a flag.
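As a rough, hedged sketch of that preventative-maintenance step (the machine names, thresholds, and readings below are hypothetical, not from any real deployment), the following Python snippet flags machines whose recent sensor readings are trending toward failure so a crew can be dispatched before the customer notices:

# Illustrative only: a toy preventative-maintenance check on made-up telemetry.
from statistics import mean

readings = {   # recent (temperature deg C, vibration mm/s) samples per machine
    "press-01": [(71, 2.1), (76, 2.6), (83, 3.4), (90, 4.1)],
    "press-02": [(68, 1.9), (69, 2.0), (68, 1.8), (70, 2.1)],
}

TEMP_LIMIT = 80.0        # average temperature considered risky
VIBRATION_LIMIT = 3.5    # average vibration considered risky

def needs_maintenance(samples):
    # flag a machine if its recent average crosses either risk threshold
    recent = samples[-3:]
    avg_temp = mean(t for t, _ in recent)
    avg_vibration = mean(v for _, v in recent)
    return avg_temp > TEMP_LIMIT or avg_vibration > VIBRATION_LIMIT

for machine, samples in readings.items():
    if needs_maintenance(samples):
        print(f"Dispatch a crew to {machine} before it fails.")
    else:
        print(f"{machine} looks healthy.")

In practice this is where the trained prediction models mentioned above would sit; the point is simply that the data pipeline, the decision rule, and the dispatch step all have to be in place before the technology pays off.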
Why must CIOs be courageous in adopting a program like this? Oftentimes, making changes like this requires an overhaul of existing processes, investment in new talent and new technology, maintaining the stack once it’s implemented, and ensuring it improves the bottom line. They must make it past the bureaucrats holding the keys to the kingdom while ensuring the project is fully implemented. If they don’t have high conviction, they will end up with a half-implemented project that yields far less than half the results they proposed. And if they don’t move forward with these efforts, the business will incur much higher maintenance costs as machines age and will bear all the risks associated with reactive maintenance.
Why must a CIO lead the charge in adopting platforms? Leading by example, CIOs can provide a clear message towards collaboration over empire building, deliver common building blocks for teams to build on, and ensure they have the support and knowledge in place to adopt future technologies.
The decision to make high-conviction bets on new technologies and shift the mentality of the business to platform thinking is no easy task. It requires CIOs to have in-depth knowledge of the business, an opinion as to which technologies will critically enable this success, and the ability to lead courageously. Should CIOs choose to remain stagnant, their business will continue to fight an uphill battle in an increasingly competitive landscape. On the other hand, the key enablers CIOs need are in place for enterprises to remain competitive in a dynamic environment, and it’s up to them to make it happen.
Whether to adopt bleeding edge technology remains an open question.
In a nutshell, the rewards for successfully adopting new technologies early can be great in terms of establishing a comparative advantage in otherwise competitive markets; unfortunately, the penalties for “betting on the wrong horse” or choosing the wrong product are equally large. Whenever an organization decides to take a chance on bleeding edge technology, there is a chance it will be stuck with a white elephant or worse. Bleeding edge software, especially open-source software, is especially common. Indeed, it is usual practice for open-source developers to release new, bleeding edge versions of their software fairly frequently, sometimes in rather unpolished states, to allow others to review, test and, in many cases, contribute to it. Users who want features that have not yet been implemented in older, more stable releases are therefore able to choose the bleeding edge version. In such cases, the user is willing to sacrifice stability, reliability, or ease of use for the sake of increased functionality.

End-to-end encryption: what is all the fuss about?

End-to-end encryption is an implementation of encryption. It protects data so that it can be read only at the two ends, by the sender and the recipient. To fully understand the process, we should first look at plain old encryption.
Long before the digital age, encryption was practised as cryptography. Ancient Egyptians complicated their hieroglyphs to prevent lower-ranked people from reading them. Modern, scientific encryption arrived in the Middle Ages with the Arab mathematician Al-Kindi, who wrote the first book on the subject. It became really serious and advanced during World War II with the Enigma machine, and breaking such ciphers contributed considerably to defeating the Nazis.
Encryption involves turning your data into a scrambled form such that it is impossible for any party intercepting it to read, understand and make any sense of it, except the recipient to whom it is intended. When it reaches this rightful recipient, the scrambled data is changed back to its original form and becomes perfectly readable and understandable again. This latter process is called decryption.
Encryption is of two types: symmetric and asymmetric.
Symmetric encryption: Tony wants to send a private message to Steve. The message is passed through an encryption algorithm, which uses a key to encrypt it. The algorithm is available to everyone, but the key is held only by Tony and Steve. The key is then used to decrypt the message.
This is called symmetric encryption, in which the same key is used to encrypt and decrypt on both sides.
The problem with this method is that this process involves sending the key from one side to other, thereby exposing it to being compromised.
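As a minimal sketch of this idea in Python (using the third-party ‘cryptography’ package purely for illustration; the package choice is an assumption, not something prescribed here), both sides hold one shared key:

# pip install cryptography   (third-party library, used only for this illustration)
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # Tony and Steve must both hold this key
cipher = Fernet(shared_key)

token = cipher.encrypt(b"Meet at the tower at 9")   # Tony encrypts with the key
plain = cipher.decrypt(token)                       # Steve decrypts with the SAME key
print(plain.decode())                               # -> Meet at the tower at 9

The weakness described above is visible here: shared_key has to reach Steve somehow, and anyone who intercepts it can decrypt every message.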
Asymmetric encryption: Here, both Tony and Steve have their own pair of public and private keys. A person’s public key is available to everyone in the network, but the private key is held only by its owner. When Tony sends a message to Steve, he uses Steve’s public key to encrypt it. The encrypted message can only be decrypted with the private key that only Steve owns.
Hence asymmetric encryption solves the key-exchange problem of symmetric encryption. Each party has two keys, one public and one private. The public keys are available to both parties, and to anyone else, as the two parties mutually share their public keys prior to communication. The message can be decrypted only with the legitimate recipient’s private key.
The best-known algorithm for asymmetric, or public-key, cryptography is RSA (Rivest–Shamir–Adleman). In this algorithm, the public key and private key are generated together and are mathematically tied to each other. Both rely on the same very large secret prime numbers. The private key is derived from two very large secret prime numbers, while the public key contains their product: it is built from the same two primes used to make the private key. What’s remarkable is that it is very hard to figure out which two large prime numbers produced the public key.

This problem is known as prime factorization, and implementations of public-key cryptography rely on how difficult it is for computers to recover the component primes. Modern cryptography lets us use randomly chosen, ridiculously gigantic prime numbers that are hard to guess for both humans and computers.
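The deliberately insecure Python toy below, with textbook-sized primes, shows how the public and private keys are tied to the same prime factors; real RSA uses primes hundreds of digits long together with padding schemes:

# Toy RSA, purely illustrative: never use tiny primes or raw RSA in practice.
p, q = 61, 53                 # two "secret" primes (toy-sized)
n = p * q                     # modulus, part of the public key
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e (Python 3.8+)

message = 42                        # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)   # decrypt with the private key (d, n)
print(ciphertext, recovered)        # recovered == 42

Anyone can encrypt with (e, n), but recovering d requires factoring n back into p and q, which is exactly the hard problem described above.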
Now let’s come back to end-to-end encryption. End-to-end encryption works on the principle of asymmetric encryption. As the name implies, it protects data such that it can be read only at the two ends, by the sender and by the recipient. No one else can read the encrypted data: not hackers, not governments, and not even the server through which the data passes. End-to-end encryption keeps the data encrypted, with no possibility of decryption, at the server and everywhere else in between. Thus, even if it wanted to, the service cannot intercept and do anything with the data. Law enforcement authorities and governments are also among those who cannot access the data, even with authorization. In theory, no one can, except the parties at the two ends.

Where is it being used?
The short answer is everywhere.
● Online transactions:
Suppose you buy something online using your credit card. Your computer needs to send the credit card number to a merchant on the other side of the world. End-to-end encryption makes sure that only you and the merchant’s computer or service can access that confidential number. A similar method of encryption is used for internet banking and other forms of online transactions.

● Secure browsing:
In the address bar, the URL starts with https:// instead of http://, the additional ‘s’ standing for secure. You will also see a padlock or a seal bearing the logo of the site’s certificate authority, such as Symantec (whose certificate business is now part of DigiCert). This seal, when clicked, opens a pop-up certifying the genuineness of the site. Certificate authorities like these provide digital certificates to websites for encryption. A small certificate-inspection sketch in Python follows this list.
Secure Sockets Layer (SSL), or its updated successor Transport Layer Security (TLS), is the standard for encryption on the web. When you visit a site that encrypts your data, normally a site that handles private information like personal details, passwords, or credit card numbers, these are the signs that indicate security and safety.

● Voice calls:
Voice calls and other media are also protected using end-to-end encryption with many apps and services. You benefit from the privacy of encryption just by using these apps for communication.
● Instant messaging apps:
Some of the first instant messaging and calling apps to ship with end-to-end encryption emerged in privacy-conscious parts of Europe, where people are particularly concerned about their privacy. Examples are Telegram and Threema. Those concerns were only heightened by the scandal of German Chancellor Merkel’s phone calls being wiretapped by the US. Also, Jan Koum, co-founder of WhatsApp, has cited his childhood under Soviet-era surveillance as one of the driving factors behind his eagerness to enforce privacy through encryption in his app, which nevertheless arrived quite late.
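As a small illustration of the certificate check mentioned under secure browsing above, the Python standard-library sketch below opens a TLS connection and prints the details a browser would show behind the padlock (example.com is just a placeholder host):

import socket, ssl

hostname = "example.com"                    # placeholder host
context = ssl.create_default_context()      # verifies the certificate chain

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("TLS version:", tls.version())
        cert = tls.getpeercert()
        print("Issued to:", dict(item[0] for item in cert["subject"]))
        print("Issued by:", dict(item[0] for item in cert["issuer"]))
        print("Expires:  ", cert["notAfter"])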

What are the advantages of end-to-end encryption services?
● It keeps your data safe from hacks. E2EE means fewer parties have access to your unencrypted data. Even if hackers compromise the servers where your data is stored (as in the Yahoo Mail hack), they cannot decrypt your data because the server does not possess the decryption keys.
● It keeps your data private. If you use Gmail, Google can know every intimate detail you put in your emails, and it can save your emails even if you delete them. E2EE gives you control over who reads your messages.
● It’s good for democracy. Everyone has the right to privacy. E2EE protects free speech and shields persecuted activists, dissidents, and journalists from intimidation.

The Blue Brain Project

The Blue Brain Project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and pathology through detailed supercomputer-based reconstructions and simulations. The project aims to create comprehensive digital reconstructions of the brain which can be used to study its character. This, in turn, helps us understand how humans process emotions and thoughts, and gives us deeper insight into the decision-making power of the human brain.

Introduction to the Blue Brain Project
The Blue Brain Project (BBP) makes use of the Blue Gene series of supercomputers developed by IBM to carry out its simulations; hence the name “Blue Brain”. The project was founded by Henry Markram at the École Polytechnique Fédérale de Lausanne (EPFL) in Lausanne, Switzerland, back in May 2005. EPFL is a research institute that focuses on natural sciences and engineering.

Today, scientists are carrying out research to create an artificial brain that can think, respond, take decisions, and store data. The aim is to upload a person’s brain into a computer so that it can think and make decisions in the absence of that person’s body. After the person’s death, this virtual brain could act on his or her behalf. So, even after the death of an individual, we would not lose that person’s information, intelligence, emotions, and memories, and these could be used for various purposes, such as continuing unfinished work or deciding on something within his or her area of expertise.

The human brain is a complex system of densely interconnected cells, more intricate than any electronic circuit in the world. It is a multi-level system with about a hundred billion neurons (nerve cells) and a hundred trillion synapses. A neuron is a cell designed to transmit information to other nerve cells, muscle cells, or gland cells, whereas synapses help neurons communicate with one another. So, the question arises: is it really possible to create a human brain? The answer is yes. Today it is possible owing to advances in technology. The world of technology has expanded at a fast rate into areas like humanoid robots, artificial intelligence, virtual reality, wearable devices, digital jewellery, Blue Eyes technology, BrainGate technology, and much more. A full human-brain simulation (100 billion neurons) is planned to be completed by 2023 if everything goes well. If so, this would be the world’s first virtual brain.

What is a Virtual Brain?
A virtual brain is an artificial brain. It can think like the natural brain, take decisions based on one’s past experience, and respond the way the natural brain would. This can be implemented using supercomputers with a large amount of storage capacity and processing power, plus an interface between the human brain and the artificial one. Through this interface, the information held in the natural brain is uploaded into the computer. Thus the brain, along with the information and intelligence of a person, can be preserved and used forever, even after the death of that person.

The need for a virtual brain
Today we have developed so far owing to our intelligence. Intelligence is an inborn quality that cannot be manufactured. Some individuals have this quality to such an extent that they can think in ways not possible for everyone. Human society will always need such intelligence and such intelligent brains. However, that intelligence is lost when the person dies. The virtual brain is an answer to that: the brain and its intelligence remain alive even after death.
We often face difficulties in remembering basic things like people’s names, their birthdays, the spellings of words, correct grammar, important dates, historical facts, and so on. A virtual brain could remove the extra stress we all face in recalling things. It is an elegant technical answer to a very common human problem.

How does the natural brain work?
The human ability to feel, interpret and even see is controlled, through computer-like calculations, by the nervous system. Yes, the nervous system seems almost like magic because we cannot see it, yet it is working through electrical impulses travelling through your body.

The human brain is a multi-level complex system with a hundred billion neurons and a hundred trillion synapses. Not even engineers have come close to making circuit boards and computers as delicate and precise as the nervous system. To grasp how it works, one must understand the following three simple functions.

1. Sensory input: When our eyes see something or our hands come in contact with a warm surface, the sensory cells, also known as neurons, send a message straight to our brain. This is called sensory input because we are putting things into our brain through our senses.
2. Integration: Integration is best described as the interpretation of things like taste, touch, and other sensations, which is possible owing to our sensory cells, the neurons. Billions of neurons work together to make sense of the changes around us.
3. Motor output: Once our brain understands the change, whether sensed through touch, taste or any other medium, it sends a message through neurons to effector cells, muscle or gland cells, that actually carry out our requests and act on the environment. The term motor output is easy to remember if one thinks of putting something out into the surroundings through the use of a motor, such as a muscle, which does the work for our body.

The Idea of Brain Simulation
The following comparison outlines the operating procedures of the natural and the simulated brain. This is one possible proposed approach; as per EPFL, development is still in progress.
1. INPUT: In the nervous system of our body, neurons are responsible for transmitting information. The body receives input through the sensory cells. These sensory cells produce electric impulses which are received by the neurons, and the neurons transfer these electric impulses to the brain. Here, neurons can be replaced by a silicon chip, so the electric impulses from the sensory cells can be received through artificial neurons and sent to a supercomputer for interpretation.
2. INTERPRETATION: The electric impulses received by the brain from the neurons are interpreted in the brain. This interpretation is accomplished by means of particular states of many neurons. The interpretation of the electric impulses received by the artificial neurons can be done by means of a set of registers, where different values in these registers represent different states of the brain.
3. OUTPUT: Based on the states of its neurons, the brain sends out electric impulses representing the responses, which are received by the body’s effector cells so that the body can respond. Similarly, based on the states of the registers, output signals can be given to the artificial neurons, which pass them on to the cells that act.
4. MEMORY: Certain neurons in our brain represent particular states permanently. When required, such a state is recalled by our brain and we remember past things. To remember things, we force the neurons to represent certain states of the brain permanently; for any interesting or serious matter, this happens implicitly. In a similar way, the required states of the registers can be stored permanently, and when required this information can be retrieved and used.
5. PROCESSING: When we think about something or make a calculation, logical and arithmetic operations are performed in our neural circuitry and stored as states. Based on new requests, the states of certain neurons are changed to give the output. In a similar way, decision making can be done by the computer by performing arithmetic and logical calculations on the stored states and the new inputs.
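Purely as an illustration of the register analogy above (and not of the actual Blue Brain software), the short Python sketch below walks the same five steps with made-up signals and thresholds:

# Illustrative only: mirrors the input/interpretation/output/memory/processing analogy.
class ArtificialBrain:
    def __init__(self):
        self.registers = {}   # current "states" (interpretation)
        self.memory = {}      # states stored permanently (memory)

    def sense(self, signal_name, impulse):
        # INPUT: an artificial neuron (sensor) delivers an electric impulse
        self.registers[signal_name] = impulse

    def interpret(self, signal_name):
        # INTERPRETATION: map the raw impulse to a state of the brain
        impulse = self.registers.get(signal_name, 0.0)
        return "hot" if impulse > 0.7 else "warm" if impulse > 0.3 else "cold"

    def respond(self, signal_name):
        # OUTPUT: send a response signal back to an effector (e.g. a muscle)
        return "withdraw hand" if self.interpret(signal_name) == "hot" else "no action"

    def remember(self, key, value):
        # MEMORY: force a state to be stored permanently
        self.memory[key] = value

    def decide(self, signal_name):
        # PROCESSING: combine stored states with new input to make a decision
        seen_before = self.memory.get(signal_name) == "hot"
        return "avoid surface" if seen_before else self.respond(signal_name)

brain = ArtificialBrain()
brain.sense("fingertip", 0.9)          # impulse from a sensory cell
print(brain.interpret("fingertip"))    # -> hot
print(brain.respond("fingertip"))      # -> withdraw hand
brain.remember("fingertip", brain.interpret("fingertip"))
print(brain.decide("fingertip"))       # -> avoid surface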

Is it possible to copy data from the brain to the computer?
Uploading could be made feasible by using tiny robots known as nanobots. These robots would be small enough to travel throughout our circulatory system. Travelling into the spine and brain, they would be able to monitor the activity and structure of our central nervous system and provide an interface with computers. Nanobots could also carefully scan the structure of our brain, providing a complete readout of its connections. This data, once entered into a computer, could then continue to function as us. Thus, the information held in the entire brain could be uploaded into the computer.

EPFL
IBM, in partnership with scientists at Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL), a research institute specializing in natural sciences and engineering, has begun simulating the brain’s biological systems and outputting the information as a working three-dimensional model that recreates the high-speed electrochemical interactions taking place inside the brain. EPFL makes use of the Blue Gene/P supercomputer designed by IBM. The machine is installed on the EPFL campus in Lausanne and is managed by CADMOS (Centre for Advanced Modelling Science). The simulations cover cognitive functions like language, learning, perception, and memory, in addition to brain malfunctions such as psychiatric disorders like depression and autism. From there, the modelling can expand to other regions of the brain and, if successful, shed light on the relationships between the genetic, molecular and cognitive functions of the brain.

The source code of the project is accessible to everybody on github.com.

Advantages:
1. Even after the death of an individual, his or her intelligence can still be used.
2. It could boost the study of animal behaviour: by interpreting the electrical impulses from an animal’s brain, its thought processes could be understood more easily.
3. It could enable the deaf to hear via direct nerve stimulation, and could even be useful for several psychological conditions.
4. We could make use of the knowledge of a brain uploaded into the computer to provide answers to mental disorders.
Disadvantages:
This technology could also bring a variety of threats.

1. Increased dependency on computer systems.
2. Computer viruses would pose an alarming critical threat: stored data could be manipulated and used in the wrong way (consider the rise of cybercrime).
3. It may open the door to a form of human cloning, and it is beyond our imagination how big a threat that would be to nature.

Conclusion
The Blue Brain Project, if implemented successfully, would surely change several things around us and would boost the realm of research and technology. Certain research and development efforts take decades or even centuries to complete, so the knowledge and efforts of a scientist could be preserved and put to use even in his or her absence. That said, it is no simple task to copy the convoluted brain system into a computer; it may take years or even decades to accomplish.

Virtual Representation of Real-life Objects: Digital Twins

What happens when somebody tells you that you have an artificial twin in a parallel space? ‘Artificial twin’ and ‘parallel space’ are two of the most mind-boggling terms for humanity, yet advances in technology have brought us to a point where we have chosen to embrace them.
Digital twin refers to a digital replica of physical assets (the physical twin), processes, people, places, systems and devices. Digital twin technology is among the top 10 strategic technology trends named by Gartner Inc. in 2017. The digital twin concept represents the convergence of the physical and the virtual world, where every industrial product gets a dynamic digital representation. Digital twins, which incorporate big data, artificial intelligence (AI), machine learning (ML) and the Internet of Things, are key to Industry 4.0 and are predominantly used in the Industrial Internet of Things, engineering, and the manufacturing business space.
While the concept of a digital twin has been around since 2002, it’s only thanks to the Internet of Things (IoT) that it has become cost-effective to implement.
When we design machines for a connected world, the conventional engineer’s toolbox can look rather empty. We need a brand-new set of design and production tools to meet the new realities of software-driven products fuelled by digital disruption. Thankfully, the arrival of digital twins offers engineers a technological leap ‘through the looking glass’ into the very heart of their physical assets. Digital twins give us a glimpse into what is happening, or what can happen, with physical assets now and far into the future.
Digital twin in a nutshell
Want a definition you can memorize? Try this one on for size:
“The digital twin is the digital representation of a physical object or system across its life-cycle. It uses real-time data and other sources to enable learning, reasoning, and dynamic recalibration for improved decision making.”
In simple English, this just means creating a highly complex virtual model that is the exact counterpart (or twin) of a physical thing. The ‘thing’ may be a car, a tunnel, a bridge, or even a jet engine. Connected sensors on the physical asset collect data that can be mapped onto the virtual model. Anyone looking at the digital twin can then see crucial information about how the physical thing is doing out in the real world.
What this means is that a digital twin is an important tool to help engineers understand not only how products are performing today, but how they will perform in the future. Analysis of the data from the connected sensors, combined with other sources of information, allows us to make these predictions.
With this information, organizations can learn more, faster, and break down old barriers surrounding product innovation, complex life-cycles, and value creation. Digital twins can be used for things like:
• Visualizing products in use, by real customers, in real time
• Building a digital thread, connecting disparate systems and promoting traceability
• Refining assumptions with predictive analytics
• Troubleshooting far-away equipment
• Managing complexities and linkages within systems-of-systems

If we digitize a paper document and trash the original, the paper-free dream, it’s gone and all we have is the digital information. No copy. With a digital twin, as the name indicates, we have two versions of a ‘thing’: the physical one and the digital twin. In this case, by a ‘thing’ we don’t mean a paper document or a batch of paper to digitize but, you guessed it, the physical assets as we know them from the Internet of Things and, today, mainly the cyber-physical systems of the Industrial Internet and Industry 4.0, including the smart factory.
HOW DIGITAL TWINS FUNCTION
An engineer’s job is to design and test products. An engineer testing a car braking system, for example, could run a computer simulation to understand how the system would perform in numerous real-world scenarios. This method has the advantage of being much quicker and cheaper than building multiple physical cars to test. But there are still a few shortcomings.
First, computer simulations like the one described above are limited to current real-world events and environments. They can’t predict how the car will react to future situations and changing circumstances. Second, modern braking systems are more than mechanics and electrics; they are also built from hundreds of lines of code.
This is where the digital twin and the IoT come in. A digital twin uses data from connected sensors to tell the story of an asset all the way through its life-cycle, from testing to use in the real world. With IoT data, we can measure specific indicators of asset health and performance, such as temperature and humidity. By incorporating this data into the digital model, the digital twin, engineers gain a complete view of how the car is performing, through real-time feedback from the vehicle itself.
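As a minimal, hedged sketch of that loop (the class name, field names, and thresholds are hypothetical and not from IBM or any real product), the Python snippet below mirrors sensor readings from a physical braking system onto its digital twin, which keeps the life-cycle history and raises an early warning:

# Illustrative only: a toy digital twin of a car braking system.
class BrakeDigitalTwin:
    def __init__(self, asset_id, temp_limit=300.0):
        self.asset_id = asset_id
        self.temp_limit = temp_limit   # disc temperature considered risky (deg C)
        self.history = []              # life-cycle record of mirrored sensor snapshots

    def ingest(self, reading):
        # map one IoT sensor snapshot from the physical asset onto the twin
        self.history.append(reading)

    def health_report(self):
        # summarize how the physical asset is doing, based on the mirrored data
        latest = self.history[-1]
        overheating = latest["disc_temp_c"] > self.temp_limit
        return {
            "asset": self.asset_id,
            "samples": len(self.history),
            "latest_disc_temp_c": latest["disc_temp_c"],
            "warning": "inspect brake discs" if overheating else None,
        }

twin = BrakeDigitalTwin("car-4711")
twin.ingest({"disc_temp_c": 220.0, "pad_wear_pct": 35})   # normal driving
twin.ingest({"disc_temp_c": 340.0, "pad_wear_pct": 38})   # hard braking event
print(twin.health_report())                               # includes the warning

A real twin would of course carry far richer physics and data models, but the loop is the same: ingest real-world telemetry, update the virtual state, and feed decisions back.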
IBM’s work with digital twin

IBM has been doing a lot of work with digital twin technologies. Just this year, IBM announced new lab services for Maximo, bringing augmented reality (AR) into asset management. The IBM lab service ‘turns on’ many visual and voice (natural language processing) features for your workforce. This enables you to see your assets in a new dimension and get instant access to critical data.

CHARACTERISTICS
• Connectivity: The technology enables connectivity between the physical component and its digital counterpart. The basis of digital twins rests on this connection; without it, digital twin technology would not exist. The connectivity is created by sensors on the physical product, which obtain data and then integrate and communicate this data through various integration technologies.
• Homogenization: Digital twins can be further characterized as a digital technology that is both the consequence and an enabler of the homogenization of data. Due to the fact that any type of information or content can now be stored and transmitted in the same digital form, it can be used to create a virtual representation of the product (in the form of a digital twin), thus decoupling the information from its physical form. Therefore, the homogenization of data and the decoupling of the information from its physical artefact, have allowed digital twins to come into existence. However, digital twins also enable increasingly more information on physical products to be stored digitally and become decoupled from the product itself
• Reprogrammable and Smart: Another important characteristic of the digital twin technology is its reprogrammable nature. As stated earlier, a digital twin makes it possible to make remote adjustments through the digital component of a twin.
• Digital traces: Another characteristic that can be observed, is the fact that digital twin technologies leave digital traces. These traces can be used by engineers for example, when a machine malfunctions to go back and check the traces of the digital twin, to diagnose where the problem occurred. These diagnoses can in the future also be used by the manufacturer of these machines, to improve their designs so that these same malfunctions will occur less often in the future.
• Modularity: The final characteristic that we can define is modularity. Modularity is particularly important in the manufacturing industry, where it can be described as the design and customization of products and production modules. By adding modularity to their manufacturing models, manufacturers gain the ability to tweak models and machines. Digital twin technology enables manufacturers to track the machines that are used and notice possible areas of improvement in the machines. When these machines are made modular, manufacturers can use digital twin technology to see which components make a machine perform poorly and replace them with better-fitting components to improve the manufacturing process.
The IoT feeds digital twins, which are hungry for real data
In real life you’ll notice that digital twins today are predominantly used in the Industrial Internet, or Industrial Internet of Things, and certainly in engineering and manufacturing. You can even create a digital twin of an environment with a set of physical assets, as long as you can get the data.
Since the digital twin idea, with its PLM roots, was conceived, things have changed fast. Driven by the arrival of the IoT and the Industrial IoT, along with all the data, analytics, and AI on top of them, digital twins are among the technology and IoT evolutions that are changing the face of several industries and applications, while adding plenty of new opportunities.
Digital twins and smart connected products: business goals and benefits
By having a smart connected product with its virtual representation, ample business goals can be served; this is a driver of digital twin adoption, along with the convergence of several other factors.
Digital twins give manufacturers and businesses an unprecedented view into how their products are performing. A digital twin can help identify potential faults, troubleshoot from afar, and ultimately improve customer satisfaction. It also helps with product differentiation, product quality, and add-on services.
If you can see how customers are using your product after they’ve bought it, you can gain a wealth of insights. That means you can use the data, if warranted, to safely eliminate unwanted products, functionality, or components, saving time and money.

The Era of Touch: RedTacton

If a Mr. John from 1901 were told by an angel that people in the future would not pay at shops with hard cash, or that almost every concern of daily life, such as communication, financial transactions and entertainment, would be handled through a single handheld device that fits in the palm, he would not believe it.
One important feature of technological progress is reduction of size without compromise in performance/value.
On a normal day, every gadget we use may seem mundane, but it is striking to see how, with time, gadgets have come ever closer to the human self. We are literally surrounded by them! And this coming closer is yet another feature of technological progress.
Wireless communication has been a star player in the technology playground. Satellite communication, Wi-Fi and Bluetooth have been saving time in sharing data and media and in communication.
Quite simply, wireless communication creates connections wherever signals arrive, allowing for easy connections because connectors (wires) are unnecessary. Seen from another angle, however, the arriving signals can be intercepted, so security becomes an issue. Wired communication, in contrast, transmits data between two fixed connection points, so interception is difficult and security can be considered high; however, connectors and cables are a nuisance.
Taking the above two points into account, there is a new technology situated directly between wireless and wired communication. This new technology of the age is called RedTacton. It is better than Wi-Fi, as signals don’t weaken in crowded areas, and it also beats Bluetooth, since communication is more secure and is possible only between the devices actually in contact.
RedTacton is a technology that uses the surface of the human body as a high speed and safe network transmission path! Here, the human body supports half duplex communication at 10Mbit/s.
The ‘T’ in Tacton stands for touch and ‘acton’ implies action. Essentially “action triggered by touching”.
RedTacton, a Human Area Networking technology developed by NTT (Nippon Telegraph and Telephone Corporation), is completely distinct from wireless and infrared technologies, as it uses the minute electric field emitted on the surface of the human body to provide a safe, high-speed network transmission path.

A transmission path is formed the moment a part of the human body comes in contact with a RedTacton transceiver. Communication is possible using any body surface, such as the hands, fingers, arms, feet, face, legs or torso. RedTacton works through shoes and clothing as well. When physical contact is broken, the communication ends.

The following three main features of RedTacton should help in understanding its functionality:

Touch – Touching, gripping, sitting, walking, stepping and other human movements can be triggers for unlocking or locking, starting or stopping equipment, or obtaining data.

Broadband and Interactive – Duplex, interactive communication is possible at a maximum speed of 10Mbit/s. Because the transmission path is on the surface of the body, transmission speed does not deteriorate in congested areas, where many people are communicating at the same time.

Any media – In addition to the human body, various conductors and dielectrics can be used as transmission media. Conductors and dielectrics may also be used in combination.

Detailed technicalities are described below:
RedTacton adopts a different technical approach. Instead of depending on electromagnetic waves or light waves to carry data, it uses weak electric fields on the surface of the body as a transmission medium.

i.) The RedTacton transmitter induces a weak electric field on the surface of the body.
ii.) The receiver senses changes in the weak electric field on the surface of the body caused by the transmitter.
iii.) It relies upon the principle that the optical properties of an electro-optic crystal can vary according to the changes of a weak electric field.
iv.) RedTacton detects changes in the optical properties of an electro-optic crystal using a laser and converts the result to an electrical signal in an optical receiver circuit.
Multiple transceivers can be used simultaneously. This is because RedTacton uses a proprietary CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol that allows multiple nodes to access the same medium.
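As a rough sketch of the idea behind CSMA/CD (a generic simulation, not NTT’s proprietary implementation; the node names and probabilities are invented), each node senses whether the shared medium, here the body surface, is free before transmitting, and backs off for a random interval when a collision is detected:

# Generic CSMA/CD-style simulation, purely illustrative.
import random

def csma_cd_send(node, medium_busy, max_attempts=5):
    # try to transmit: sense the carrier, send, and back off on collision
    for attempt in range(max_attempts):
        if medium_busy():                    # carrier sense: skip this slot if busy
            continue
        collided = random.random() < 0.3     # pretend another node transmitted too
        if not collided:
            return f"{node}: frame delivered on attempt {attempt + 1}"
        backoff_slots = random.randint(0, 2 ** (attempt + 1) - 1)
        _ = backoff_slots                    # a real node would wait this many slots
    return f"{node}: gave up after {max_attempts} attempts"

random.seed(7)
print(csma_cd_send("wristband", medium_busy=lambda: random.random() < 0.2))
print(csma_cd_send("doorknob", medium_busy=lambda: random.random() < 0.2))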

As with every technology, it is necessary to ensure that this revolutionary technology fulfils all safety standards.
The transmitting and receiving electrodes of the RedTacton transceiver are completely covered with insulating film, so the body of the person acting as the transmission medium is completely insulated. This makes it impossible for current to flow into a person’s body from the transceiver. When communication takes place, a displacement current is generated by the electrons in the body because the body is subjected to minute electric fields. Displacement currents of this kind are very common everyday occurrences to which we are all subjected. RedTacton conforms to the “Radio-frequency Exposure Protection Standard (RCR STD-38)” issued by the Association of Radio Industries and Businesses (ARIB).
RedTacton has a wide range of applications, some of which are described below:

Elimination of human error:
RedTacton devices embedded in medicine bottles transmit information on the medicine’s attributes. Whenever the user touches the wrong medicine, an alarm is immediately triggered on the terminal he or she is carrying. The alarm sounds only when the user actually touches the medicine bottle, which reduces the false alarms common with passive wireless ID tags that trigger simply by proximity.
Avoidance of risk at construction sites. (An alarm sounds only if special equipment is handled by anyone other than supervisors)

Marketing Applications:
When a consumer stands in front of an advertising panel, information and advertising matching his or her attributes is automatically displayed.
Inside a shop, shoppers can view related information on their mobile terminals immediately after touching a product!

Intuitive Operations:
Natural movements and actions are the trigger (touch).
RedTacton transceivers embedded in two terminals can communicate not only data but also the control or configuration instructions needed to operate devices (broadband & interactive).
Print wherever you want just by touching the desired printer with one hand and a PC or digital camera with the other hand to make the link. Complicated configurations are simplified by downloading device drivers at first touch. Songs can be transferred to portable music players from notebook PCs with just a touch.

Instant Private Data Exchange:
By shaking hands, personal profile data can be exchanged between the users’ mobile terminals (an electronic exchange of business cards). Communication can be kept private using authentication and encryption technologies. Group photos taken by digital cameras are instantly transferred to each individual’s mobile terminal. Diagrams drawn on whiteboards during meetings are transferred to individuals’ mobile terminals on the spot.

Personalization:
A digital lifestyle can be instantly personalized with just a touch. A pre-recorded configuration script can be embedded in a mobile terminal with a built-in RedTacton transceiver. When another device with RedTacton capabilities is touched, personalization data and configuration scripts can be downloaded automatically.
Personalization of Mobile Phones:
Your own phone number is allocated and billing commences. Personal address books and call histories are imported automatically. PCs are configured to the user’s specifications simply by touching the mouse.

Personalization of Automobiles:
The seat position and steering wheel height adjust to match the driver just by sitting in the car. The driver’s home is set as the destination in the car.

New Behaviour Pattern:
Various conductors and dielectrics can be used as RedTacton communication media, and this has the potential to create new behaviour patterns. Walls and partitions can be used as communication media.

Conferencing System:
An electrically conductive sheet is embedded in the table. A network connection is initiated simply by placing a laptop on the table. Using different sheet patterns enables segmentation of the table into subnets. Walls and partitions can be used as communication media, eliminating construction to install electrical wiring. Ad hoc networking using conductive liquid sprays is possible.

Security Applications:
Automatic user authentication and log-in with just a touch. ID and privileges are recorded in a mobile RedTacton device. Corresponding RedTacton receivers are installed at security check points. The system can provide authentication and record who touched the device.

User Verification Management:
When a user carrying a mobile RedTacton-capable device in a pocket holds the doorknob normally, the ID is verified and the door is unlocked. Secure lock administration is possible by combining personal verification tools such as fingerprint ID or other biometrics in the mobile terminal.

RedTacton technology can be considered superior to Wi-Fi and infrared technology thanks to its high-speed, uninterrupted communication between arbitrary points on the body. It beats Bluetooth because body-based networking is more secure than other broadcast systems. Importantly, it meets all safety standards, because it does not require the electrode to be in direct contact with the skin.
It is thus clear that RedTacton ticks every box to usher in another step forward for the tech industry. The fact that few compelling applications are available yet, and that the hardware is still costly to build, are by comparison trivial concerns.

I would conclude by saying that RedTacton technology fulfils the aforementioned criteria to bring a revolution to the tech industry. Reduction in size and increased convenience through direct human involvement seem to be its most pertinent traits. And it is only a matter of time before RedTacton becomes part of our daily lives!