Primitive Technology: Pit and chimney furnace

Demolition of the old furnace. Dig a deep pit: the furnace is built below the ground, about 25 cm square. Furnace grate. Flue. Furnace door brick. Building a chimney: a 2-meter-high chimney. Burning charcoal: unburned charcoal wood can be left for the next run, and the finished charcoal burns at a higher temperature than ordinary wood. Iron bacteria (the source of the iron ore). Burning grass for ash (the ash reduces the melting point of the iron ore). Mix iron ore, charcoal powder and grass ash into a cylindrical block. Slag falls through the grate. Remove the molten cylindrical block. Granular pig iron: you can see the rust, which proves that it is indeed iron.

Furnace with deep pit and chimney.

MBR, MBBR and FBR (Part 2) – Comparison of wastewater technologies

The comparison of the three systems shows different advantages and disadvantages for industrial wastewater applications.

Effluent water quality: MBRs show in general a slightly better BOD removal than MBBRs or FBBRs. Very fine membranes can even hold back germs, so the effluent water quality is in general better.

Resistance to influent peaks and grease leaks: MBRs are very sensitive to changing influent values. Grease leaks can cause clogging of the fine membranes, so that they must be cleaned or replaced. MBBRs are less sensitive than MBRs, although the risk of fill media clogging is high in case of interrupted mixing and grease leaks. FBBRs, in contrast, are very robust and handle changing influent values, grease leaks and interrupted oxygen supply very well.

Difficulty of operation: MBRs require monitoring of the activated sludge process as well as backwashing of the membranes at certain intervals. Their operation can therefore be challenging, and higher qualifications are necessary. MBBRs and FBBRs are more forgiving, and especially FBBRs are easy to operate.

Required space: The higher MLSS level of MBRs allows more BOD removal per water volume, so their required space is lower compared to MBBRs and FBBRs.

Energy consumption: Due to the higher MLSS, permanent backwashing of the membranes and fouling prevention, MBRs need a high air volume and thus have a high energy demand and cost. FBBRs require less energy than MBBRs because the air supply for the biofilm is installed directly underneath the fill media, which results in a better oxygen intake.

Overall cost: The installation costs for all three systems are about the same. However, over time MBRs are more expensive than MBBRs and FBBRs because of higher operational and maintenance costs.

All in all, MBRs are a good fit for applications that require a high-quality water effluent, whereas MBBRs and especially FBBRs are a good solution for pretreatment of high BOD loads.
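The qualitative comparison above can be recapped as a small lookup table. This is only a sketch: the ratings paraphrase the transcript's points and are not measured data.

```python
# Rough summary of the MBR/MBBR/FBBR comparison from the text above.
# Ratings are qualitative paraphrases, not engineering data.

COMPARISON = {
    "MBR":  {"effluent": "best", "robustness": "low",    "operation": "hard",
             "space": "low",     "energy": "high"},
    "MBBR": {"effluent": "good", "robustness": "medium", "operation": "easy",
             "space": "medium",  "energy": "medium"},
    "FBBR": {"effluent": "good", "robustness": "high",   "operation": "easiest",
             "space": "medium",  "energy": "low"},
}

def pick_system(priority):
    """Toy selector: return the systems that score best on one criterion."""
    best = {"effluent": "best", "robustness": "high",
            "operation": "easiest", "energy": "low", "space": "low"}
    return [name for name, scores in COMPARISON.items()
            if scores[priority] == best[priority]]

print(pick_system("effluent"))    # ['MBR']
print(pick_system("robustness"))  # ['FBBR']
```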

Their resistance and forgiving design make them suitable for various industrial wastewater applications. Additionally, the easy operation and maintenance guarantee a long-lifetime, low-cost solution in the long term. If you wish for further information or need support with the design of your project, please contact us. Thanks for watching, and if you like our three-minute tutorials, please subscribe and don't forget to give a thumbs up!

What is RFID? How RFID works? RFID Explained in Detail

Hey friends, welcome to the YouTube channel ALL
ABOUT ELECTRONICS. In this video we will look at RFID technology, also known as radio frequency identification. We will see what RFID is, what the different applications of RFID are, what is inside an RFID chip, and how RFID works. So, first of all, let's start with what RFID is. Radio frequency identification, or RFID, is a technology that works using radio waves. It is used to automatically identify or track objects. Here, the object could be anything.

The objects could be books in a library, items you purchase at a shopping mall, inventory in a warehouse, or even your own car. And not only objects: it can also be used for tracking animals as well as birds. In RFID technology, an RFID tag is attached to the object that we want to track. The RFID reader continuously sends out radio waves, and whenever the object is in range of the reader, the RFID tag transmits a feedback signal to the reader.

It is very similar to the technology used in barcodes. But with a barcode, the object and the scanner must be in line of sight. RFID is not a line-of-sight technology: as long as the object is within range of the reader, the reader can identify the object and receive its feedback signal. Using RFID we can even track multiple objects at the same time. So, now let us see what is inside an RFID system. As we have already discussed, an RFID system consists of two components: the RFID reader and the RFID tag. RFID tags come in several varieties: a tag can be active, passive, or semi-passive. A passive tag does not have its own power supply; it relies on the radio waves coming from the RFID reader as its source of energy.

In the case of a semi-passive tag, the tag has its own power supply, but for transmitting the feedback signal back to the RFID reader it still relies on the signal coming from the reader. In the case of an active tag, the tag has its own power supply and also uses it to transmit the signal back to the reader. As passive tags do not have their own power supply, their range is smaller than that of active and semi-passive tags. Alright, so now first of all let's see what is inside an RFID reader. RFID readers come in many sizes and shapes.

An RFID reader could be a handheld reader, or it could be as large as a door, as you often see at the entrance of shopping malls. An RFID reader mainly consists of three components. The first component is the RF signal generator, which generates the radio waves that are transmitted via the antenna. To receive the feedback signal coming from the tag, the RFID reader also has a receiver or signal detector. And to process the information sent by the RFID tag, the reader also has a microcontroller.

Or, many times, the RFID reader is directly connected to a computer. So, now let us look at the RFID tags. Most of the tags used today are passive tags, because passive tags are much cheaper than active tags and, as they do not require any power source, they are quite compact. Passive tags also come in many forms.

A passive tag could be the size of a key chain, the size of a credit card, or even in the form of a label. So let us see the basic components inside an RFID tag. The first component is the transponder, which receives the radio waves coming from the reader and sends the feedback signal back to the reader. As passive tags do not have their own supply, they rely on the radio waves coming from the reader and harvest their energy from them. Using a rectifier circuit, the energy carried by the radio waves is stored across a capacitor.

This energy is used as the supply for the controller as well as the memory element inside the RFID tag. So now let us look at the working principle of the RFID system. Before that, let us see the different frequencies at which RFID tags operate. Mainly, RFID tags operate in three bands: the low frequency range, the high frequency range, and the ultra-high frequency range. The exact frequency of operation varies from country to country, but the majority of countries follow these bands for the operation of RFID tags. As low-frequency signals can travel only a very short distance, the range of RFID tags using the low frequency band is up to about 10 cm. High-frequency radio waves can travel up to about 1 m, while ultra-high-frequency radio waves can travel much farther: RFID tags using the ultra-high frequency band can reach 10 to 15 meters.
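The three bands and the read ranges mentioned above can be captured in a small lookup. The band frequencies shown are common industry values (they are not stated in the video, and exact allocations vary by country), so treat them as illustrative:

```python
# RFID frequency bands with the typical read ranges given in the text.
# Band frequencies are common industry values; allocations vary by country.

BANDS = {
    "LF":  {"frequency": "125-134 kHz", "max_range_m": 0.10, "coupling": "inductive"},
    "HF":  {"frequency": "13.56 MHz",   "max_range_m": 1.0,  "coupling": "inductive"},
    "UHF": {"frequency": "860-960 MHz", "max_range_m": 15.0, "coupling": "far field"},
}

def usable_bands(distance_m):
    """Return the bands whose nominal read range covers the given distance."""
    return [name for name, b in BANDS.items() if distance_m <= b["max_range_m"]]

print(usable_bands(0.05))  # all three bands reach 5 cm
print(usable_bands(5.0))   # only UHF reaches 5 m
```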

So now, let us see the working principle of the RFID tag. The working principle depends on the frequency of operation: for low frequency and high frequency operation it is based on inductive coupling, while for ultra-high frequency RFID tags it is based on electromagnetic coupling. First of all, let's see the working principle of low and high frequency RFID tags. As I said earlier, the RFID reader continuously sends radio waves at a particular frequency. These radio waves serve three purposes. First, they induce enough power into the passive tag. Second, they provide a synchronization clock for the passive tag. Third, they act as a carrier for the data coming back from the RFID tag. These are the three basic purposes served by the radio waves sent by the RFID reader. In the case of low frequency and high frequency operation, as the RFID reader and tag are very close to each other, the working principle is based on inductive coupling.

The field generated by the RFID reader couples with the antenna of the RFID tag, and because of this mutual coupling, a voltage is induced across the coil of the RFID tag. Some portion of this voltage is rectified and used as the power supply for the controller and the memory elements. As the RFID reader sends radio waves at a particular frequency, the voltage induced across the coil is also at that frequency, so the induced voltage is also used to derive the synchronization clock for the controller. Now suppose we connect a load across the coil; current will start flowing through this load.

If we change the impedance of this load, the current flowing through it will also change. And if we switch the load on and off, the current will be switched on and off as well. This switching of current, or rate of change of current, also induces a voltage in the RFID reader. Switching the load on and off in this way is known as load modulation. Now, if we switch the load on and off according to the data stored inside the RFID tag, then that data can be read by the RFID reader in the form of a voltage. In this way, using load modulation, we change the voltage generated across the RFID reader coil and thereby modulate the carrier. So, in low frequency and high frequency RFID tags, the data is sent back to the RFID reader using this load modulation technique. So now, let us see the working principle at ultra-high frequency.
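As an aside, the load-modulation scheme just described can be sketched as simple on-off keying. This is a toy model, not a radio simulation; the amplitude values and samples-per-bit are assumptions chosen only to make the idea concrete:

```python
# Toy model of load modulation: the tag switches a load across its coil
# according to its data bits, and the reader sees this as a small amplitude
# change on its own carrier. Amplitudes (1.0 / 0.8) are assumed values.

SAMPLES_PER_BIT = 8

def tag_modulate(bits):
    """Reader-side carrier amplitudes: switching the load in damps the carrier."""
    amplitudes = []
    for bit in bits:
        amplitudes += [0.8 if bit else 1.0] * SAMPLES_PER_BIT
    return amplitudes

def reader_demodulate(amplitudes):
    """Recover bits by thresholding the average amplitude per bit period."""
    bits = []
    for i in range(0, len(amplitudes), SAMPLES_PER_BIT):
        chunk = amplitudes[i:i + SAMPLES_PER_BIT]
        bits.append(1 if sum(chunk) / len(chunk) < 0.9 else 0)
    return bits

data = [1, 0, 1, 1, 0]
print(reader_demodulate(tag_modulate(data)))  # [1, 0, 1, 1, 0]
```

A real tag modulates against the reader's carrier at a subcarrier rate, but the round trip above shows the core idea: data is encoded purely by changing how much energy the tag's coil draws.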

In the case of ultra-high frequency, as the distance between the reader and the tag is up to a few meters, the coupling between the reader and the tag is far-field coupling. The RFID reader continuously sends radio waves at a particular frequency towards the tag, and in response the tag sends a weak signal back to the reader. This weak signal is known as the backscattered signal. The intensity of the backscattered signal depends on the load matching across the antenna: when the load condition changes, the amount of energy reflected back to the reader changes as well.

So, by changing the condition of the load we can change the intensity of the backscattered signal. And if we change the load condition according to the data stored in the RFID tag, that data can be sent back to the RFID reader; in this way the reader is able to recover the data. In the case of far-field coupling, as the distance between the RFID reader and the tag is a few meters, the initial signal sent by the reader must be strong enough that the backscattered signal can still be detected by the reader. So this is how, in the case of far-field coupling, the signal is sent back to the RFID reader using backscatter modulation. Now, let us see the different applications of RFID technology. RFID is used in a wide range of applications, and many of them we have already discussed.
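The load-matching idea behind backscatter can be made concrete with the reflection coefficient at the antenna-load interface. The impedance values below are illustrative assumptions (a 50-ohm antenna is a common textbook choice, not something the video specifies):

```python
# How the load on the tag antenna changes the reflection coefficient, and
# hence the backscattered energy. Impedances are illustrative only.

def reflection_coefficient(z_load, z_antenna):
    """Complex reflection coefficient at the antenna-load interface."""
    return (z_load - z_antenna.conjugate()) / (z_load + z_antenna)

Z_ANTENNA = complex(50, 0)  # assumed 50-ohm antenna

matched = abs(reflection_coefficient(complex(50, 0), Z_ANTENNA))     # energy absorbed
shorted = abs(reflection_coefficient(complex(0.001, 0), Z_ANTENNA))  # energy reflected

# Switching between two such load states gives two distinct backscatter
# intensities, which is how the tag encodes its data bits.
print(f"matched |G| = {matched:.2f}, shorted |G| = {shorted:.2f}")
```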

So, here I am listing a few applications in which the RFID system is used. I hope that in this video you understood what RFID technology is and how it works. If you have any question or suggestion, please let me know in the comments section below. If you liked this video, hit the like button and subscribe to the channel for more such videos.


The 10 automotive technologies for your safety – Hybrid Life

10 car safety technologies.

Automatic brake assist: predictive detection with a millimetre-wave radar and a single or stereoscopic lens camera; visual and acoustic warning signals; automatic full brake application.

Rear traffic alert: obstacle detection with microwave radars housed within the rear bumper; it measures the approach time of vehicles.

Lane assist: a single-lens camera coupled with a millimetre-wave radar assists the driver with steering to keep the vehicle running in the middle of its lane, even on curves and even when white lines are not visible.

Lane departure warning.

Automatic high beam: this system uses a single-lens camera in the interior mirror.

Pre-collision system (vehicles, cyclists, pedestrians): this system uses a single-lens camera located on the inside rear-view mirror and a millimetre-wave radar (or a laser in some vehicles). In case of danger, it activates the emergency brake. Detection now distinguishes both bicyclists and pedestrians, day and night.

Blind spot information system: this system uses microwave radars located in the rear bumper.

Airbags and body rigidity: body rigidity through the use of high-strength steels and programmed deformation zones; the airbags are triggered between 30 and 150 ms after impact to protect passengers.

Driver Attention Alert: the DAA monitors the steering angle applied by the driver, using steering angle sensors over a driving period to establish a baseline. It then compares the current driving pattern to that reference level using statistical analysis of steering correction errors, correlated with secondary data such as speed, time and air conditioning settings.
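The Driver Attention Alert idea above can be sketched as a simple statistical check: build a baseline of steering-correction magnitudes early in the drive, then flag later windows that deviate strongly from it. The z-score threshold, window handling, and sample data are all assumptions; a production DAA fuses many more signals:

```python
# Toy sketch of a Driver Attention Alert: compare recent steering corrections
# against a baseline established earlier in the drive. Thresholds are assumed.

from statistics import mean, stdev

def correction_magnitudes(angles):
    """Per-sample steering corrections: absolute change between readings."""
    return [abs(b - a) for a, b in zip(angles, angles[1:])]

def attention_alert(baseline_angles, recent_angles, z_threshold=3.0):
    """True if recent steering corrections deviate from the driver's baseline."""
    base = correction_magnitudes(baseline_angles)
    recent = correction_magnitudes(recent_angles)
    mu, sigma = mean(base), stdev(base)
    z = (mean(recent) - mu) / sigma if sigma > 0 else 0.0
    return z > z_threshold

# Smooth baseline vs. a drowsy pattern of drift followed by jerky corrections.
baseline = [0.0, 0.5, 0.9, 0.6, 0.1, -0.4, -1.0, -0.5, 0.1, 0.4]
drowsy   = [0.0, 0.2, 0.1, 6.0, -5.0, 0.3, 7.0, -6.5, 0.2, 0.1]
print(attention_alert(baseline, drowsy))    # True
print(attention_alert(baseline, baseline))  # False
```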

Electronic Stability Program. When you drive, several forces act on your vehicle; you probably feel them in your daily driving when you take a curve. When you turn the wheel, the car does not simply move to one side: it actually rotates around a vertical axis. Under certain conditions, the car may turn a little too much or not enough, for example when you swerve to avoid an obstacle, take a curve too fast, or meet a slippery road. That's why the ESP (Electronic Stability Program) was designed. It is a set of sensors that continuously monitors how the car behaves relative to the driver's instructions. The ESP compares the steering wheel information with the speed of each wheel; within a few milliseconds, the ESP can tell whether the car is following the expected curve.

When you swerve quickly to avoid an obstacle, the car will tend to go straight ahead. The ESP will brake the inner wheel, which adds a rotational force so the car responds more precisely to the driver's orders. It can also brake a front wheel to control the rotation and get the car back on track. If necessary, the ESP will reduce the engine power; this restores control more quickly.
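The ESP comparison described above can be sketched in a few lines: estimate the yaw rate the driver is asking for from the steering angle and speed (here with a simple bicycle-model formula), then compare it to the measured yaw rate. The wheelbase, understeer gradient, tolerance, and which wheel to brake are illustrative assumptions, not the logic of any specific production system:

```python
# Sketch of ESP intervention logic: compare intended vs. measured yaw rate.
# All constants and the intervention mapping are illustrative assumptions.

import math

WHEELBASE_M = 2.7          # assumed wheelbase
UNDERSTEER_GRAD = 0.0024   # assumed understeer gradient (s^2/m)

def intended_yaw_rate(steering_angle_rad, speed_mps):
    """Yaw rate implied by the steering angle at the current speed."""
    return (speed_mps * steering_angle_rad) / (
        WHEELBASE_M + UNDERSTEER_GRAD * speed_mps**2)

def esp_decision(steering_angle_rad, speed_mps, measured_yaw_rate, tol=0.05):
    """Crude intervention choice based on the yaw-rate error."""
    target = intended_yaw_rate(steering_angle_rad, speed_mps)
    error = target - measured_yaw_rate
    if abs(error) <= tol:
        return "no intervention"
    # Turning less than requested (understeer): brake an inner wheel to add
    # rotation. Turning more than requested (oversteer): brake an outer front
    # wheel to counter it.
    if error * math.copysign(1, target) > 0:
        return "brake inner rear wheel"
    return "brake outer front wheel"

print(esp_decision(0.05, 25.0, measured_yaw_rate=0.20))
```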

DeepMind’s AlphaStar Beats Humans 10-0 (or 1)

Dear Fellow Scholars, this is Two Minute Papers
with Károly Zsolnai-Fehér. I think this is one of the more important
things that happened in AI research lately. In the last few years, we have seen DeepMind
defeat the best Go players in the world, and after OpenAI’s venture in the game of DOTA2,
it’s time for DeepMind to shine again as they take on Starcraft 2, a real-time strategy
game. The depth and the amount of skill required
to play this game is simply astounding. The search space of Starcraft 2 is so vast
that it exceeds both Chess, and even Go by a significant margin. Also, it is a game that requires a great deal
of mechanical skill, split-second decision making and we have imperfect information as
we only see what our units can see. A nightmare situation for any AI. DeepMind invited a beloved pro player, TLO
to play a few games against their new StarCraft 2 AI that goes by the name AlphaStar. Note that TLO is a professional player who is easily in the top 1% of players, or even better: mid-grandmaster, for those who play StarCraft.

This video is about what happened during this
event, and later, I will make another video that describes the algorithm that was used
to create this AI. The paper is still under review, so it will
take a little time until I can get my hands on it. At the end of this video, you will also see
the inner workings of this AI. Let’s dive in. This is an AI that looked at a few games played
by human players, and after that initial step, it learns by playing against itself for about
200 years. In our next episode, you will see how this
is even possible, so I hope you are subscribed to the series. You see here that the AI controls the blue
units, and TLO, the human player plays red.

Right at the start of the first game, the
AI did something interesting. In fact, what is interesting is what it didn’t
do. It started to create new buildings next to
its nexus, instead of building a walloff that you can see here. Using a walloff is considered standard practice
in most games, and the AI used these buildings to not wall off the entrance, but to shield
away the workers from possible attacks. Now note that this is not unheard of, but
this is also not a strategy that is widely played today and is considered non-standard.

It also built more worker units than what
is universally accepted as standard, we found out later that this was partly done in anticipation
of losing a few of them early on. Very cool. Then, almost before we even knew what happened,
it won the first game a little more than 7 minutes in, which is very quick, noting that
in-game time is a little faster than real-time. The thought process of TLO at this point is
that that’s interesting, but okay, well, the AI plays aggressively and managed to pull
this one off.

No big deal. We will fire up the second game, in the meantime,
few interesting details. The goal of setting up the details of this
algorithm was that the number of actions performed by the AI roughly matches a human player,
and hopefully it still plays as well, or better. It has to make meaningful strategic decisions. You see here that this checks out for the
average actions every minute, but if you look here, you see around the tail end here that
there are times when it performs more actions than humans and this may enable playstyles
that are not accessible for human players. However, note that many times it also does
miraculous things with very few actions. Now, what about another important detail: reaction time? The reaction time of the AI is set to 350 ms, which is quite slow. That's excellent news, because this is usually a common angle of criticism for game AIs.

The AI also sees the whole map at once, but
it is not given more information than what its units can see. This perhaps is the most commonly misunderstood
detail, so it is worth noting. So, in other words, it sees exactly what a
human would see if the human would move the camera around very quickly, but, it doesn’t
have to move the camera, which adds additional actions and cognitive load to the human, so
one might say that the AI has an edge here. The AI plays these games independently, what’s
more, each game was played by a different AI, which also means that they do not memorize
what happened in the last game like a human would.

Early in the next game, we can see the utility of the walloff in action, which is able to completely prevent the AI's early attack. Later that game, the AI used disruptors, a
unit, which if controlled with such level of expertise, can decimate the army of the
opponent with area damage by killing multiple units at once. It has done an outstanding job picking away
at the army of TLO. Then, after getting a significant advantage,
AlphaStar loses it with a few sloppy plays and by deciding to engage aggressively while
standing in tight choke points. You can see that this is not such a great
idea. This was quite surprising as this is considered
to be StarCraft 101 knowledge right there. During the remainder of the match, the commentators
mentioned that they play and watch matches all the time and the AI came up with an army
composition that they have never seen during a professional match. And, the AI won this one too. After this game it became clear that these
agents can play any style in the game.

Which is terrifying. Here you can see an alternative visualization
that shows a little more of the inner workings of the neural network. We can see what information it gets from the
game, a visualization of neurons that get activated within the network, what locations
and units are considered for the next actions, and whether the AI predicts itself as the
winner or loser of the game. If you look carefully, you will also see the
moment when the agent becomes certain that it will win this game. I could look at this all day long, and if
you feel the same way, make sure to visit the video description, I have a link to the
source video for you. The final result against TLO was 5 to 0, so that's something, and he mentioned that AlphaStar played very much like a human does and almost always managed to outmaneuver him. However, TLO also mentioned that he is confident that upon playing more training matches against these agents, he would be able to defeat the AI.

I hope he will be given a chance to do that. This AI seems strong, but still beatable. I would also note that many of you would probably
expect the later versions of AlphaStar to be way better than this one. The good news is that the story continues
and we’ll see whether that’s true! So at this point, the DeepMind scientists
said that “maybe we could try to be a bit more ambitious”, and asked “can you bring
us someone better”? And in the meantime, pressed that training
button on the AI again.

In comes MaNa, a top tier pro player. One of the best Protoss players in the world. This was a nerve-wracking moment for DeepMind
scientists as well, because their agents played against each other, so they only knew the
AI’s winrate against a different AI, but they didn’t know how they would compete
against a top pro player. It may still have holes in its strategy. Who knows what would happen? Understandably, they had very little confidence
in winning this one. What they didn’t expect is that this new
AI was not slightly improved, or somewhat improved. No, no, no. This new AI was next level. This set of improved agents among many other
skills, had incredibly crisp micromanagement of each individual unit. In the first game, we’ve seen it pulling
back injured units but still letting them attack from afar masterfully, leading to an
early win for the AI against Mana in the first game.

He and the commentators were equally shocked
by how well the agent played. And I will add that I remember from watching
many games from a now inactive player by the name MarineKing a few years ago. And I vividly remember that he played some
of his games so well, the commentators said that there’s no better way to put it, he
played like a god. I am almost afraid to say that this micromanagement
was even more crisp than that. This AI plays phenomenal games. In later matches, the AI did things that seemed
like blunders, like attacking on ramps and standing in choke points, or using unfavorable
unit compositions and refusing to change it and, get this, it still won all of those games
5 to 0.

Against a top pro player. Let that sink in. The competition was closed by a match where
the AI was asked to also do the camera management. The agent was still very competent, but somewhat
weaker and as a result lost this game, hence the "0 or 1" part in the title. My impression is that it was asked to do something that it was not designed for, and I expect a future version to be able to handle this use case as well. I will also commend MaNa for his solid game plan for this game, and huge respect to DeepMind for their sportsmanship. Interestingly, in this match, MaNa also started using the worker oversaturation strategy that I mentioned earlier. This he learned from AlphaStar and used in his winning game. Isn't that amazing? DeepMind also offered a reddit AMA where anyone
could ask them questions to make sure to clear up any confusion, for instance, the actions
per minute part has been addressed, I’ve included a link to that for you in the description. To go from a turn-based perfect information
game, like Go, to a real time strategy game of imperfect information in about a year sounds
like science fiction to me.

And yet, here it is. Also, note that DeepMind’s goal is not to
create a godlike StarCraft 2 AI. They want to solve intelligence, not StarCraft
2, and they used the game as a vehicle to demonstrate its long-term decision making
capabilities against human players. One more important thing to emphasize is that
the building blocks of AlphaStar are meant to be reasonably general AI algorithms, which
means that parts of this AI can be reused for other things, for instance, Demis Hassabis
mentioned weather prediction and climate modeling as examples. If you take only one thought from this video,
let it be this one. I urge you to watch all the matches because
what you are witnessing may very well be history in the making. I put a link to the whole event in the video
description, plus plenty more materials, including other people’s analysis, Mana’s personal
experience of the event, his breakdown of his games and what was going through his head
during the event. I highly recommend checking out his 5th game,
but really, go through them all, it’s a ton of fun! I made sure to include a more skeptical analysis
of the game as well to give you a balanced portfolio of insights.

Also, huge respect for DeepMind and the players
who practiced their chops for many many years and have played really well under immense
pressure. Thank you all for this delightful event. It really made my day. And the ultimate question is, how long did
it take to train these agents? 2 weeks. Wow. And what’s more, after the training step,
the AI can be deployed on an inexpensive consumer desktop machine. And this is only the first version. This is just a taste, and it would be hard
to overstate how big of a milestone this is.

And now, scientists at DeepMind have sufficient
data to calculate the amount of resources they need to spend to train the next, even
more improved agents. I am confident that they will also take into
consideration the feedback from the StarCraft community when creating this next version. What a time to be alive! What do you think about all this? Any predictions? Is this harder than DOTA2? Let me know in the comments section below. And remember, we humans build up new strategies
by learning from each other, and of course, the AI, as you have seen here, doesn’t care
about any of that. It doesn’t need intuition and can come up
with unusual strategies. The difference now is that these strategies
work against some of the best human players. Now it’s time for us to finally start learning
from an AI. gg. Thanks for watching and for your generous
support, and I'll see you next time!

Why Don’t We Have Self-Driving Cars Yet?

This is Volvo's 360c concept car, and it's just one idea of what
completely driverless cars might look like one day. That means cars without
even a steering wheel that can safely navigate public roads entirely on their own. But with how much we hear
about self-driving technology making its way into everyday
cars, it's hard not to wonder: How much longer do we have to wait? Understanding just how far we've come with self-driving technology
can be a bit tricky. To help define how sophisticated the automated technology actually is, the Society of Automotive Engineers classifies these systems
using five levels. Level 1 is driver assistance, where the vehicle is able to
control steering or braking but not both simultaneously. Level 2 is partial automation, where the car can assist with both steering and braking simultaneously, but your attention is required
on the road at all times. Both Tesla's Autopilot and
General Motors' Super Cruise are examples of this. Level 3 is conditional automation, where certain circumstances allow the car to handle most aspects of driving and the driver has the
ability to temporarily take their eyes off the road.

Level 4 is high automation, where, in the right conditions, the car can take full control, giving the driver a chance
to focus on other tasks. And Level 5 is full automation. In this hypothetical
situation, the car drives you, and there isn't even a steering wheel. So, what level are we currently at? Most experts would agree: somewhere between Levels 2 and 3. However, one of their biggest concerns is the public's misconception
that we're much further along. Bryan Reimer: There's an
incredible amount of confusion in the general public around
the context of self-driving. In our survey data here, about
23% of respondents believe that a self-driving vehicle is
available for purchase today. And a lot of that has to do
with statements by Elon Musk and others talking about
the driverless capabilities and the self-driving
capabilities of vehicles.

These are systems that are
made to assist the driver under the supervision of a driver. Narrator: So, is it simply the limits of these automated systems
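The five SAE levels described earlier can be captured in a small lookup, with a helper that reflects the key distinction the transcript draws: whether the human must keep watching the road. A sketch only; the one-line summaries paraphrase the narration:

```python
# The SAE driving-automation levels as summarized in the text above.

SAE_LEVELS = {
    1: ("driver assistance", "steering OR braking, not both simultaneously"),
    2: ("partial automation", "steering AND braking; driver attention required at all times"),
    3: ("conditional automation", "driver may temporarily look away in certain circumstances"),
    4: ("high automation", "full control in the right conditions"),
    5: ("full automation", "no steering wheel; the car drives you"),
}

def driver_must_watch_road(level):
    """True when the system only assists and constant supervision is mandatory."""
    return level <= 2

# Per the transcript, today's systems sit somewhere between Levels 2 and 3:
# Autopilot and Super Cruise are Level 2, so supervision is still required.
print(SAE_LEVELS[2][0], "->", driver_must_watch_road(2))
print(SAE_LEVELS[3][0], "->", driver_must_watch_road(3))
```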
that's holding us back? Actually, there are a number
of other factors in the way. For starters, our roads. Simply put, many roads,
especially in the United States, are too much of a mess to support cars that can drive by themselves. Reimer: So, while many
individuals out there are really working on the development of self-reliant automation, in essence, a robot that's fully capable
of making its own decisions in today's infrastructure, the reality is, today's infrastructure is not
well equipped for autonomy. In essence, potholes, poor lane markings, and all the other crumbling aspects of our nation's infrastructure aren't going to support high-tech well.

Narrator: In addition to more public roads needing signs and lane markings that self-driving cars
can clearly make out, vehicles need to be wirelessly connected with that traffic infrastructure,
as well as one another, in order to interact with the
world around them flawlessly. Fortunately, automakers like
Volvo already have technology that allows their cars to
communicate with each other and alert drivers of hazards
via a cloud-based network. This type of connected technology is being tested even further within driverless cars at Mcity, a 32-acre mock city and testing facility at the University of Michigan. Greg McGuire: So, what
are connected vehicles? When we say "connected" at Mcity, we're really referring
not to streaming Netflix into your passenger seat so much, that's a pretty solved
problem in the industry, but in how vehicles and infrastructure can be connected together for lots of other benefits, like safety. The idea is a low-latency way for vehicles to tell other vehicles and anything else that wants to listen where they are and where they're going.

Narrator: So, once traffic infrastructure and communication is handled, what else do we need to address? Well, traffic laws. Governments have a number of
important decisions to make in society's transition
to self-driving vehicles. In the beginning stages,
they'll have to define what weather conditions are appropriate for vehicles to be operating
fully autonomously. This is due to the fact that
many of these car systems can be disrupted by rain and snow. One industry they could
look to for guidance is the airline industry, who doesn't hesitate to cancel
flights in inclement weather. They'll also have to initially find a way for autonomous vehicles to
safely navigate public roads amongst traditional cars.

A possible solution could
be designated lanes, similar to the
high-occupancy-vehicle lanes found on highways and bus lanes found in certain cities. Ayoub Aouad: The government's
kind of leaving it up to states to decide what's going on, just because the technology's so new and they still
don't really understand what it's going to look like in the end. Once the government
does fully get involved, the federal government, they're gonna have to speak to lobbyists, people that represent truck
drivers and taxi commissions. And they're gonna realize that, you know, a lot of jobs could be lost, and that's going to be difficult. And then, also, liability. If these cars are on the roads and they're getting into
accidents, like, who is liable? Narrator: With all of
these things considered, back to our original question: How soon until we have self-driving cars? Aouad: I'd say within the decade
it's gonna be on highways, but if we're talking about being able to take your car wherever you
want across the United States, being able to travel through New York City and sleep the whole time, I don't think we're
anywhere close to that.

Probably several decades away from that. Reimer: You know, car
makers and tech companies are very heavily focused on the context of driverless technologies. Now, I'm not saying that
that's not the future. It is the future. But, as many have begun to admit publicly, that future is further away than anybody's realistically considered to date. We as humans are really good
at predicting the future; we're not so good at the timelines. And the timelines to driverless technology changing how I live and move is probably in the order
of several decades, if not further away.

McGuire: How close are we to the Jetsons' car? We're still a ways away, in my opinion. It isn't really a matter of when these technologies will arrive, to me, but whether we can be ready and utilize them in the best way possible.

Advanced Technologies Academy teacher receives ‘Nevada’s Teacher of the Year’ honor