Fourth-graders in 2028 might grow their own cheese for lunch. In a concept called “Lunchabios,” researchers envision a Lunchables-like synthetic biology kit that would be marketed to children. Kids would use a bioreactor to culture cheddar, then pair it with premade crackers and ham at lunch a few days later. A “Pro-GMO” certification on the package would celebrate genetic modification, in contrast to today’s cautionary GMO labeling.
Lunchbox bioreactors are possible, the researchers say, because the technology is becoming cheap enough to be accessible to everyone. If companies want to become more transparent about how they produce cultured food, and to increase public literacy about synthetic biology, they will likely offer more hands-on experiences that let consumers try making that food themselves. They are also likely to target children, whether or not parents support the idea.
PubFighters AR is an iPhone AR game that brings your body’s movement and power into the immersive world of augmented reality.
Players can duel in their real environment using virtual objects, hitting other players or targets with virtual plates and bottles.
This video game demonstrates the convergence of the physical and digital worlds and its application in sports. Sports ethics codes forbid unsafe activities and may restrict some of these games, but augmented reality can still facilitate new games and sports.
PubFighters AR: an AR Sport, by Amir-reza Asadi, 2021-04-26.
Phishing, ransomware, DDoS, and viruses are among the most common cyberattack types and attack vectors (1). With the expanding use of AI in IT, it is predictable that we will see more and more AI attacks in the near future.
These “AI attacks” are fundamentally different from traditional cyberattacks (2), and the defenses used against current cyberattacks may not be applicable to them. AI attacks may pursue the following goals:
Cause Damage: the attacker wants to cause damage by making the AI system malfunction. An example is an attack that causes an autonomous vehicle to ignore stop signs.
Hide Something: the attacker wants to evade detection by an AI system.
Degrade Faith in a System: the attacker wants an operator to lose faith in the AI system.
Currently, the most common method used in AI attacks is the input attack, which compromises an AI system by manipulating what is fed into it so that the system’s output serves the attacker’s goal (2).
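To make the input-attack idea concrete, here is a minimal sketch of how a small, targeted perturbation can flip the decision of a toy linear classifier standing in for a detector (e.g., a “stop-sign detector”). The weights, input values, and step size are illustrative assumptions, not parameters of any real system:

```python
# Toy "input attack" sketch: a small adversarial perturbation flips
# the decision of a simple linear classifier. All values here are
# illustrative assumptions, not real detector parameters.

def score(w, x, b):
    """Linear decision score: w . x + b (positive = target detected)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def input_attack(w, x, eps):
    """FGSM-style step for a linear model: nudge each feature a small
    amount (at most eps) against the gradient of the score, which for
    a linear model is simply w."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

# Hypothetical "stop-sign detector": positive score = sign detected.
w = [1.0, -2.0, 0.5]
b = 0.1
x = [0.5, 0.1, 0.2]                  # benign input: sign is detected
x_adv = input_attack(w, x, eps=0.3)  # each feature shifted by <= 0.3

print(score(w, x, b) > 0)      # True: sign detected on the clean input
print(score(w, x_adv, b) > 0)  # False: the attack hides the sign
```

Even though each feature changes by at most 0.3, the perturbation is chosen to push the score in one direction, which is why such attacks can succeed with changes that look minor to a human observer.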
We may see the following products in the near future:
Data anomaly detectors built by computer-security companies.
AI training security certificates that attest to the quality of an AI system (e.g., a “Five-Star AI System” rating).
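As a hypothetical sketch of what such a data anomaly detector might do at its simplest, the snippet below flags model inputs whose features fall far outside the statistics of trusted training data. The training rows, feature values, and z-score threshold are all illustrative assumptions:

```python
# Hypothetical data anomaly detector sketch: flag inputs whose features
# deviate strongly from the training distribution. Values and the
# z-score threshold are illustrative assumptions.

from statistics import mean, stdev

def fit_stats(training_rows):
    """Compute per-feature (mean, stdev) from trusted training data."""
    cols = list(zip(*training_rows))
    return [(mean(c), stdev(c)) for c in cols]

def is_anomalous(stats, x, z_threshold=3.0):
    """Flag the input if any feature lies more than z_threshold
    standard deviations from its training mean."""
    for (mu, sd), xi in zip(stats, x):
        if sd > 0 and abs(xi - mu) / sd > z_threshold:
            return True
    return False

# Trusted training data with two features, then two incoming inputs.
train = [[0.9, 1.1], [1.0, 0.9], [1.1, 1.0], [0.95, 1.05]]
stats = fit_stats(train)

print(is_anomalous(stats, [1.0, 1.0]))  # False: resembles training data
print(is_anomalous(stats, [5.0, 1.0]))  # True: feature 0 is far out of range
```

A real product would use far richer statistics and models, but the design point is the same: screening what is fed into an AI system is one plausible defense against input attacks.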