Nobody can tell us what the future holds, but we do know that technology will help us get there and achieve our dreams. The stuff of science fiction is being made real by high-tech pilots like these. Here's what will change the world.
Molecular machines are nano-scale assemblers that construct themselves and their surroundings into ever more complex structures. Sometimes dubbed “nanotech” in the media, these devices are promising — but also widely misunderstood. Here’s what separates the science fact from science fiction.
The concepts that underpin this form of nanotechnology have certainly had long enough to percolate through modern science. Richard Feynman first speculated about the idea of “synthesis via direct manipulation of atoms” in his 1959 talk There's Plenty of Room at the Bottom. Looking back, that talk sparked much of the subsequent thinking about treating atoms and molecules as simple building blocks.
Perhaps most famously, K. Eric Drexler considered the idea of taking the bottom-up manufacturing approach to its atomic extreme in his 1986 book Engines of Creation: The Coming Era of Nanotechnology. There, he posited the idea of a nanoscale “assembler” that could scuttle around, building copies of itself or other molecular-sized objects with atomic control; one which might in turn be able to create larger and more complex structures. A kind of microscopic production line, building products from the most basic ingredients of all. Coming when it did, in the mid-eighties, it felt very much like science fiction.
The truth is that scientists have been very busy indeed over the past thirty years, creating a host of molecular-sized structures that can manipulate and assemble themselves, move, and even work together. It's not always easy, of course — building at the molecular level requires atomic accuracy — but mercifully chemistry and physics have advanced to a point where it's increasingly possible. And there's a rich pool of molecular machines, some inspired by nature, others by mechanical engineering principles, to show for it.
Nanotechnology is the future. By making things smaller, smarter and faster, we can enable everything from cheaper, gruntier computer processors right through to a cure for cancer.
In terms of available products, Intel’s Ivy Bridge CPUs feature the smallest process at 22 nanometres. However, while the International Technology Roadmap for Semiconductors (ITRS) reckons we’ll hit 14nm by 2014 and 10nm by 2016, it’s getting progressively harder to achieve these milestones.
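As a rough rule of thumb, transistor density scales with the inverse square of the feature size, which is why each of those shrinks is worth fighting for. Here's a back-of-the-envelope calculation — a simplification that ignores real-world design constraints:

```python
# Rough area scaling between process nodes: all else being equal,
# a transistor's footprint shrinks with the square of the feature size.
def density_gain(old_nm: float, new_nm: float) -> float:
    """Approximate transistor-density multiplier from a node shrink."""
    return (old_nm / new_nm) ** 2

print(round(density_gain(22, 14), 2))  # 22nm -> 14nm: ~2.47x
print(round(density_gain(22, 10), 2))  # 22nm -> 10nm: ~4.84x
```

In other words, each of those steps on the ITRS roadmap promises well over double the transistors in the same area, which is exactly why they're so hard-won.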
Thankfully, there are clever folk at organisations like CSIRO working tirelessly on the problem.
In collaboration with MIT and the University of California, Los Angeles in the US, as well as RMIT and Monash University in Victoria, CSIRO has come up with a way to use molybdenum trioxide as a conductive nano-material. By using layers of the crystallised oxide, researchers were able to pump electrons through the material at “ultra-high” speeds with minimal scattering, the benefits of which are explained below:
RMIT’s Professor Kourosh Kalantar-zadeh said the researchers were able to remove “road blocks” that could obstruct the electrons, an essential step for the development of high-speed electronics. “Instead of scattering when they hit road blocks, as they would in conventional materials, they can simply pass through this new material and get through the structure faster,” Professor Kalantar-zadeh said.
“Quite simply, if electrons can pass through a structure quicker, we can build devices that are smaller and transfer data at much higher speeds.”
According to CSIRO's press release on the development, there's still a way to go before the technology makes its way into regular gadgets, but at least it's a start.
Believe it or not, the high-resolution camera in your smartphone doesn't cost that much on its own. What if we could use these cheap cameras in cars to let them see the road and make decisions that keep the driver safe? With cheap sensors and cameras, cars can become more aware of their environment.
Say a car had 20 connected cameras around the exterior of the vehicle, and a cloud-connected machine learning engine inside it which recognises obstacles, signs and even obscured pedestrians. It’s less about creating a self-driving car, and more about creating a smart car.
Nvidia CEO Jen-Hsun Huang gave an example of a parent turning around from the driver's seat to face the rear seat for a moment in a bid to scold some naughty kids. In those seconds, another car stopped suddenly in front of the vehicle, and there was no way the parent could stop in time to avoid an impact. What a Drive PX-enabled car could do is, in milliseconds, identify that the lane adjacent to it is empty, and quickly change lanes to perform an emergency stop and avoid a collision entirely. It's able to do that because it thinks and reacts faster than a person ever could.
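The decision logic in that example can be sketched in a few lines. This is purely illustrative — the function and its thresholds are hypothetical stand-ins, not Nvidia's actual Drive PX software:

```python
# Toy sketch of the emergency manoeuvre described above -- purely
# illustrative, not Nvidia's real software. The car checks whether
# it can brake in time; if not, it looks for a clear adjacent lane
# before performing an emergency stop.
def choose_action(stopping_distance_m: float,
                  gap_to_obstacle_m: float,
                  adjacent_lane_clear: bool) -> str:
    if gap_to_obstacle_m >= stopping_distance_m:
        return "brake in lane"
    if adjacent_lane_clear:
        return "change lane, then emergency stop"
    return "emergency brake and brace"

print(choose_action(40.0, 25.0, True))  # change lane, then emergency stop
```

The hard part, of course, is not this decision tree but reliably filling in its inputs — stopping distance, obstacle gap, lane occupancy — from raw camera and sensor data in milliseconds.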
Auto-pilot has been a reality for years in commercial planes, but what about us landlubbers who get from point A to point B in our cars? Tesla is currently rolling out a feature to its Model S vehicles that allows them to practically drive themselves.
It’s a series of ultrasonic sensors (12 to be exact) attached to the car which can see everything within five metres of the vehicle in all directions. There’s also a forward-facing radar and a forward-facing camera to sense traffic in front of you and lock onto it. You also get a new smart braking system to stop you in your tracks if anything goes wrong in front. What it does is give you the ability to follow traffic around at any speed for a smooth auto-acceleration and auto-braking experience. You still have to steer, of course, but that’s to be expected.
In layman's terms? It's cruise control 2.0.
We’ve experienced something similar on Audi vehicles before, but it’s never been as smooth or easy to use as it is on the Model S.
To activate the Autopilot, there's a stalk on the left-hand side of the steering wheel, underneath the indicator stalk. Push it down once and it keeps you at your current speed. Flicking the stalk up while the Autopilot is active increases your speed by 5km/h per flick, while flicking it down reduces it by the same amount. Activating Autopilot sees the car's network of sensors fire up to track your location on the road, and more importantly, the location of other cars around you. It then “locks on” to the car in front and matches speed, acceleration and deceleration so you always maintain a consistent distance from its back bumper.
You can tell the system to keep a distance of anything from one car length up to seven car lengths. I imagine you’d only need seven car lengths if you were covertly surveilling someone, and if that’s the case, get a less conspicuous car.
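That stalk behaviour boils down to a very small state machine. Here's a toy model of it — an illustration of the description above, not anything resembling Tesla's firmware:

```python
# Minimal model of the Autopilot controls described above: each
# flick up adds 5 km/h, each flick down removes 5 km/h, and the
# follow distance is clamped to between one and seven car lengths.
class AutopilotSetting:
    def __init__(self, speed_kmh: int, follow_car_lengths: int = 3):
        self.speed_kmh = speed_kmh
        self.follow = min(7, max(1, follow_car_lengths))

    def flick_up(self) -> None:
        self.speed_kmh += 5

    def flick_down(self) -> None:
        self.speed_kmh = max(0, self.speed_kmh - 5)

ap = AutopilotSetting(speed_kmh=100)
ap.flick_up()
ap.flick_up()
ap.flick_down()
print(ap.speed_kmh)  # 105
```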
Unlike other laser-guided cruise systems we’ve driven, the radar-guided, sensor-enabled Tesla Autopilot feature “sees” further ahead and almost anticipates the movement of traffic at speed.
Chip maker Nvidia is also working on a solution: it wants to put its Tegra X1 chip to work as the brains of future driverless cars. Nvidia has also announced something called the Drive PX, an “auto-pilot car computer” that's powered by two X1s. The point of this chip? To know everything that's going on in and around your car, from what's displayed on its screens to anything coming in from outward-facing driver assistance cameras. It's the brain that makes sense of what's coming in through the car's many eyes, one that lets it really learn about and understand its surroundings using neural network technology that can teach itself what cars and vans and cyclists and pedestrians look like over time.
Sounds great, right? Hell yeah. The catch is that it's still a long, long way off. The X1 is a chip with the horsepower to make this sort of stuff possible, sure, but cars still have a lot of catching up to do, whether it's by including a ton of high-res panels that will show you all those awesome Tron graphics, or by having a bevy of outward-facing cameras that provide all the information something like an “auto-pilot car computer” would want to process. And that's to say nothing of the challenges of getting this tech — and this tech specifically — into cars; everybody is working on a self-driving car these days.
The tech works now though. Nvidia’s already got prototypes of this tech that are functional, and by extension, cars with brains smart enough to spot cyclists and pedestrians and other squishy things it best not hit, or to realise that a bunch of brakelights up ahead means that it should probably get ready to start slowing down. Nvidia claims it’s building a neural net of learnings that it can wirelessly sync to every other Nvidia-powered car simultaneously to make them all smarter and smarter.
3D Printing At Scale
3D printing has been disrupting the manufacturing and prototyping space for ages, so how will it evolve into the future? It all comes down to scale.
We’ve seen 3D printers used for everything from iPhone cases to makeshift weapons, but if you think bigger, what can these new printers really be used for? Could you really make your own house with a 3D printer in less than 20 hours? Turns out you can, and the technology is now set to be used by NASA for a future Moon colony.
The man behind this ambitious housing project is Professor Behrokh Khoshnevis, and he's disgusted that in the 21st century, the world is still ridden with poverty-stricken slums characterised by makeshift corrugated iron shacks. He wanted to find a way to improve the basic concept of house construction so that it was accessible to everyone, because with better shelter comes a more civilised society.
To build a house right now, you're looking at a slow, labour-intensive, dangerous process that's almost always over budget. Professor Khoshnevis said that housing construction is one of the only industries that still does things manually, unlike the motoring or technology industries, which use automated production methods to complete routine construction tasks.
So how do you fix a slow, expensive housing concept that has been set in stone for the last few centuries so that everyone around the world can get access to it? That’s easy, Professor Khoshnevis says. You use 3D printing.
Khoshnevis is heavily involved in computer-aided design (CAD), robotics and rapid prototyping with the University of Southern California, and he’s using that experience to scale up 3D printing so that it can be used in housing construction.
“I name this process Contour Crafting, which is essentially a way of streamlining the process by benefiting from the experience we have gained in the domain of [automated and technology-assisted] manufacturing,” he told TEDxOjai attendees earlier this year.
Khoshnevis wants to build entire neighbourhoods with Contour Crafting, and he claims it can be done at a fraction of the cost in a smaller block of time.
As far as expenses go, the materials for the 3D printed house are projected to cost 25 per cent less than traditional houses and labour costs can be cut in half. In terms of timing from start to finish, Khoshnevis said that “we anticipate that an average house, like 2500 square foot house, can be built in about 20 hours from a custom design”.
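To see how those percentages stack up, here's a back-of-the-envelope calculation using a hypothetical baseline build — the dollar figures below are illustrative assumptions, not numbers from Khoshnevis:

```python
# Back-of-the-envelope version of the savings quoted above, using a
# hypothetical baseline: materials cost 25 per cent less and labour
# costs are cut in half.
baseline_materials = 100_000  # hypothetical conventional build, USD
baseline_labour = 80_000      # hypothetical, USD

printed_materials = baseline_materials * 0.75  # 25% cheaper materials
printed_labour = baseline_labour * 0.5         # half the labour cost

print(printed_materials + printed_labour)  # 115000.0 vs 180000 baseline
```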
Here’s how it works. A CAD design is sent to a large-scale 3D printer that is mounted to a block of land. The printer lays out the concrete-like foundation of the home through a nozzle that can move anywhere on the property. Like any 3D print-out, the house is made layer-by-layer and reinforced with various materials — like electrical, plumbing and communication infrastructure — as the build progresses.
The material used is a mixture of concrete and fibre polymers, making it more than three times stronger than the traditional concrete used in today's houses. The concrete that goes into your house right now can withstand roughly 3000 pounds per square inch of pressure, while the new printed concrete can withstand around 10,000 pounds per square inch.
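A quick sanity check on that strength claim, using the figures quoted:

```python
# The printed concrete's rated pressure versus conventional concrete,
# from the psi figures quoted above.
conventional_psi = 3_000
printed_psi = 10_000
print(round(printed_psi / conventional_psi, 1))  # 3.3 -- "more than three times stronger"
```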
The best thing about the construction process, Khoshnevis added, is that it can print out any house design you like. Curved walls? No problem. Water feature in your front yard? Can do. Custom tile design and a few feature walls? Simple, thanks to the addition of laser jet printer nozzles attached to the printing array.
As if that's not impressive enough, Professor Khoshnevis' concept is currently being supported by NASA so that the technology can one day be used to build a colony on the Moon. You read it right: the Moon.
This is the future I want to live in. One where I can print out my own house in less than a day, for half the cost, using the same technology NASA is using on the moon.
The robotics industry is finally starting to catch up with Hollywood effects. Robots are a reality, and they have a vast number of applications in our modern world.
A humanoid robot isn’t about to serve you a drink any time soon, but we do have robots already that will send chills down your spine. Google is leading the charge when it comes to developing robots for the world.
Google can’t stop buying robotics companies. In the past two months, eight of the 12 companies the search giant has acquired have “robotics” in their name or descriptions. Here’s your complete breakdown of the robot army presently at Google’s command.
As stated in the midst of its buying spree, the company's largely letting its new robotics divisions continue to work on their own projects, and why wouldn't it? The newly acquired companies are doing a damn good job. They're even winning competitions.
Robot technology would help with self-driving cars, certainly, but the range of these acquisitions hints at even broader ambitions. Again, we don’t know much. They’re all a part of the Google X division, which is top secret by definition. We do know what the new companies in the Google family are up to, though, and that might offer us some clues.
These guys are rockstars. The Japanese team that got its start at Tokyo University just took the top prize at DARPA’s Robotics Challenge Trial thanks to the cunning and agility of its 165cm, 95kg bipedal robot. After being purchased by Google in early December 2013, Schaft’s blue machine proved to be the best at walking on uneven terrain, climbing ladders, clearing debris, and connecting hoses, ultimately scoring an impressive 27 out of 32 possible points.
The company was originally founded to build disaster response robots after the Fukushima nuclear disaster in 2011 but has since broadened its scope, thanks in part to funding from the US government. Who knows how far they’ll go floating on Google’s coffers?
Industrial Perception is an imaging company that spun out of the Menlo Park robotics company Willow Garage. Before being acquired by Google in December — the day after the Schaft acquisition, in fact — IPI was focusing on building advanced technology for 3D vision-guided robots to be used in manufacturing and logistics. This includes the ability to see and sort different objects, say, in a factory. You could imagine a company like Amazon being very interested in this kind of technology, but it's so far unclear exactly what Google wants to do with it.
Redwood Robotics started as a joint venture between Meka Robotics, SRI International, and Willow Garage, IPI’s parent. And like IPI, it’s always had a very focused mission. Redwood wants to build the “next generation arm” for robots. Meka Robotics founder Aaron Edsinger once said that he wants to do for robotic arms what the Apple II did for computers. Specifically, Redwood wants to build robotic arms that can work alongside people even in the comfort of their own home. That also means being the common arm manufacturer of service robots, so in the future, everybody’s personal robot could have Redwood arms. Well, make that Google arms.
Like its cousin, Redwood Robotics, Meka is dedicated to building robots that can live and work with human beings. The company describes its flagship model, the M1 Mobile Manipulator, as having “human-safe, human-soft and human scale robot technologies that will enable the robots of tomorrow to work alongside people in the home and the workplace.” The human-like faces on the robot can even emote, a feature that’s as creepy as you let it be.
Even before joining Google, Holomni was a pretty secretive outfit. All we really know from its now-shuttered website is that the company described itself as “creators of high-tech wheels for omnidirectional motion”. The image above is just a stock photo guesstimate of what a “high-tech wheel for omnidirectional motion” might look like.
Bot & Dolly
If Redwood and IPI are the engineers in the family, Bot & Dolly are the artists. The company describes itself as “a design and engineering studio that specialises in automation, robotics, and filmmaking” with a mission “to advance motion control and automation as a creative medium.” In reality, this means that Bot & Dolly use robots to help film commercials and movies like Gravity. This doesn't mean that Google wants to get into the movie business, but hey, if a robot's good enough to make a movie, what else can it do?
Boston Dynamics is the real celebrity of the bunch. After acquiring six robotics companies in six days, Google took a couple of days off before announcing this major acquisition. The company is known for building all kinds of futuristic bots from the bipedal, humanoid robot Atlas (above) to the impossibly fast, four-legged Cheetah. Actually, Boston Dynamics brings a whole robot army to Google, one that the military is very eager to recruit.
Google's latest purchase, DeepMind, is less interested in building an actual robot than in designing an intelligent robot brain. The self-described “cutting edge artificial intelligence company” that uses “the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms” comes with a team of 75 researchers and software engineers whose talents could be put to use on anything from the hypothetical Googlebot to the company's flagship search engine and anything in between. Because after all, robots are just another step in Google becoming the company that is everywhere, and does everything.
Renewable energy for the consumer was long thought to be expensive and difficult to deploy, but with new developments from Tesla, that’s about to change. Tesla Energy wants the world to be cleaner, and it’s doing so by letting homes, businesses and power companies store their electricity better and reduce peak load on the grid. Here’s the best part — we’ll get the battery in Australia too, at an incredibly cheap price. It’s called Powerwall.
The Powerwall is Tesla Energy's battery for homes. It's available in two sizes — 7kWh and 10kWh, for either $3000 or $3500 US dollars to installers — and is basically an oversized uninterruptible power supply. That price is awesome, and should make the batteries affordable even after installers add their fees and overheads.
Those prices are amazing, by the way. They mean Tesla is able to produce batteries for around $250/kWh, where competitors already in Australia cost $1000/kWh. This will be (and I don't use this word freely) a gamechanger for energy storage in Australia once they are available. At the moment, information on Australian integrators is thin on the ground, although Canberra-based Reposit Power has apparently teamed up with Tesla to bring the Powerwall to Australia.
The Powerwall has two key purposes and one complementary benefit. It’ll connect to both the existing power grid and your solar panel setup if you have one, either storing the energy your solar panels receive for later use or charging itself from the energy grid in times of cheapest off-peak power (for most users, that’s overnight). It’ll also provide backup power in the case of an outage.
That way, the Powerwall provides a baseline of power to your house during times of peak power cost, but will either provide that power effectively for free (off solar) or at the cheapest possible grid rate (by charging off-peak). The Powerwall battery is rated to 2kW of continuous power and 3kW of peak electricity draw — enough to handle the basic needs of a small household’s appliances, lighting and devices.
The Powerwall comes in a bunch of colours, and is designed to be placed on your wall and be visible rather than hidden away. You can stack the batteries, too — anywhere from one to nine Powerwalls can be combined for up to 90kWh of energy storage (at US$3500 a pop). US residents can order the Powerwall now, with shipping in three to four months. Batteries will initially be produced in Tesla's Fremont factory in California and then production will move to the Gigafactory next year.
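Those specs translate into some easy arithmetic — how long a single 7kWh unit could sustain its full continuous draw, and the headline capacity of a maxed-out stack:

```python
# Simple figures from the specs quoted above: runtime of a 7kWh
# unit at its full 2kW continuous rating, and the capacity of a
# maximum stack of nine 10kWh units.
capacity_kwh = 7.0
continuous_kw = 2.0
hours_at_full_draw = capacity_kwh / continuous_kw
max_stacked_kwh = 9 * 10.0

print(hours_at_full_draw)  # 3.5 hours at full continuous draw
print(max_stacked_kwh)     # 90.0 kWh fully stacked
```

In practice a household rarely sits at the full 2kW, so a single unit can cover baseline overnight usage for considerably longer than three and a half hours.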
The Powerwall isn't going to reduce your household's grid energy usage to zero — it's not a big enough battery and would also require significant investment in solar to charge completely — but it will reduce peaks in grid electricity reliance, letting Powerwall users charge overnight instead of in the daytime when everyone else is using the network and increasing demand.
And because of that shifting of load, it will reduce the world’s need for peak power generation, theoretically reducing the need for dirty power sources like fossil fuels. Australia relies heavily on comparatively dirty coal and gas power generation for baseline and peak power demand, although investment in solar, wind, wave and geothermal energy is increasing — it’s Tesla’s goal to allow these to contribute more and therefore make things cleaner.
Tesla has confirmed to Gizmodo that the Powerwall will begin sales and installation in Australia in the first quarter of next year, although prices are still to be confirmed. Providers of the technology will be confirmed closer to that launch date.
The world is getting smaller thanks to broadband getting faster. Putting aside the National Broadband Network for a second, wireless is where we’ll see the biggest need for speed in the next five to 10 years.
We first found out about Telstra 4GX way back in October of last year, but as of January 1 this year it’s finally happening. The Telstra 4GX 700MHz rollout has officially kicked off around metropolitan and regional areas around the country, and the biggest Aussie telco has some big plans for the new network. 4GX covers a much wider area than Telstra’s existing 900MHz and 1800MHz networks, so if you’re a regular long-distance commuter or traveller then Telstra should remain your number one choice.
4GX is fast, too. Telstra bought a big chunk of 700MHz spectrum, more than it owns of any other frequency band, and what that means is faster downloads and uploads and lower lag. To use 4GX, though, you'll need a compatible smartphone. Both the Apple iPhone 6 and iPhone 6 Plus support the 700MHz 4G frequency that Telstra's 4GX is based on, as does almost any new mid- or high-end smartphone or tablet like the Sony Xperia Z3, LG G3 or Samsung Galaxy S5. That means if you've bought a smartphone within the last year or so, and you're a metropolitan Telstra customer, there's a pretty good chance you'll already be running on 4GX when you're in any built-up area.
4GX isn’t the end of Telstra’s plans for this year in mobile data, though. It has switched on 4G Advanced (or LTE-Advanced, or carrier aggregation) for any site that has 4GX switched on, so if you have a brand new smartphone like the Huawei Ascend Mate7 or the Samsung Galaxy Note 4, you will get ridiculously fast 4G download speeds — we’re talking in the region of 150Mbps, three times as fast as any other Aussie network.
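For context, here's what a 150Mbps link means in everyday terms, treating 1GB as 1000MB for simplicity:

```python
# Converting the headline figure: 150 megabits per second into
# megabytes per second, and the time to download a 1GB file at
# full speed (1GB treated as 1000MB here).
link_mbps = 150
mbytes_per_sec = link_mbps / 8          # 8 bits per byte -> 18.75 MB/s
seconds_per_gb = 1000 / mbytes_per_sec  # ~53 seconds for 1GB

print(mbytes_per_sec)
print(round(seconds_per_gb, 1))
```

Real-world speeds depend on congestion and signal, of course, but even a fraction of that theoretical peak is plenty for streaming and everyday use.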
Optus is in the middle of a massive 700MHz 4G rollout across the country, the mobile network spectrum it paid $650 million for in 2013. Australia’s second largest telco is calling its next-gen network 4G In More Places — it’s a bit of a wordy title, but it gets the point across that you’ll be able to use your smartphone or hotspot or tablet in more places on Optus’ fast 4G.
Head over to Optus' mobile data network coverage tracker and you'll see a bunch of green and purple splotches representing the 3G and 4G data coverage in your area. Click on the 3 Month or 6 Month checkbox to show the 4G data network expansion currently taking place around the country, and you'll see a new red area — that's the future 4G In More Places. It covers a far wider area than the current 4G network.
Optus’ 4G In More Places coverage rollout is still ongoing, but as the year progresses it’ll get larger and larger — you might just wake up one morning and find your smartphone blazing through those Facebook posts and Twitter updates. By mid-year, you’ll notice the Optus network speeding along, as long as you have a supported 4G 700MHz smartphone. Since you’ll only find 700MHz on relatively new phones, that’s a great reason to upgrade.
Vodafone in 2015 is all about the low-band. We’re specifically talking about the 850MHz frequency that Vodafone used to use for its 3G network, but has now re-farmed partially to offer extra 4G speed and distance. The promise that Vodafone made was that by the end of 2014, it would cover 95 per cent of Australia’s metropolitan population with its new 4G network, so if you’re in any major city around Australia your phone should be already switching to 4G 850MHz wherever possible.
In reality, 850MHz isn’t really about speed — although there will be an element of that, since it’s extra bandwidth and capacity on the fastest possible mobile network that Voda is running. It’s about coverage, and since 850MHz is a relatively low-frequency band of the mobile telecommunications spectrum, its wavelength means it has far superior in-building penetration compared to Vodafone’s existing 4G.
You will see better speeds, though, as long as you’re on a device that supports the 850MHz 4G band. Vodafone’s own Pocket Wi-Fi 4G hotspot doesn’t support the band, and neither does its 4G dongle, but basically any modern smartphone includes 4G 850MHz, like the Sony Xperia Z3, LG G3 or Samsung Galaxy S5 — certainly more smartphones, and cheaper ones too, than support the 700MHz frequency used by Optus and Telstra’s new networks.
Low-band 4G means that if you're a Vodafone customer, you'll see that little 4G symbol in more places around the city or suburb that you live in. Extra range and high-speed coverage is a very good thing, and given that coverage has been a valid criticism of the company in the past, this should be the year that Vodafone kicks a lot of goals.
Did you know that a freakishly high percentage of the internet isn't accessible through traditional search engines like Bing and Google? Welcome to the Darknet.
It's also known as the Deep Web, but it's essentially what's under the hood of the internet.
Most of it requires a special browser, called Tor, to access.
Since the revelations about NSA spying came to the surface earlier this year, everybody’s paying a little bit more attention to their privacy online. That’s good news for Tor, a suite of software and network of computers that enables you to use the internet anonymously. And for anyone who uses it.
Tor includes anonymity software as well as a special browser, but it's the network that stands to benefit most from this spike in interest. In the past few weeks, there's been a 100 per cent rise in the number of Tor clients — an all-time record. It's unclear exactly why: NSA concerns have to be part of it, but they don't explain the full bounce.
Tor is short for “The Onion Router”. This refers both to the software that you install on your computer to run Tor and the network of computers that manages Tor connections. Put simply, Tor enables you to route web traffic through several other computers in the Tor network so that the party on the other end of the connection can’t trace the traffic back to you. That way, the more Tor users there are, the more protected your info. As the name implies, it creates a number of layers that conceal your identity from the rest of the world.
The computers that handle the intermediary traffic are known as Tor relays, and there are three different kinds of them: middle relays, exit relays and bridges. Naturally, exit relays are the final relays in the chain of connections, while middle relays handle traffic along the way. Anybody can sign up to run a middle relay from the comfort of their own home without fear of being implicated in any illicit activity that might be bouncing off their connection. Those who host exit relays bear a bit more of a burden, as they're the ones who are targeted by police and copyright holders if any of that illicit activity is detected. Bridges are simply Tor relays that aren't listed publicly, perhaps to shield them from IP blockers. It should be made clear that you don't have to run a relay to use Tor, but it's a nice thing to do.
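The layering idea is easier to see in code. Below is a toy sketch of onion-style wrapping and peeling using a simple XOR cipher — a teaching illustration only, nothing like Tor's real cryptography, which uses proper public-key encryption and per-circuit key negotiation:

```python
# Toy illustration of onion routing's layered encryption. The client
# wraps the message once per relay; each relay peels exactly one
# layer, so only the exit relay ever sees the plaintext, and no
# single relay knows both the sender and the destination.
def xor_layer(data: bytes, key: bytes) -> bytes:
    """XOR 'encryption' -- applying the same key twice undoes it."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"hello from the client"
relay_keys = [b"entry-key", b"middle-key", b"exit-key"]

# Wrap: the innermost layer uses the exit relay's key, so the
# entry relay's layer ends up on the outside.
onion = message
for key in reversed(relay_keys):
    onion = xor_layer(onion, key)

# Each relay along the path peels its own layer in turn.
for key in relay_keys:
    onion = xor_layer(onion, key)

print(onion)  # b'hello from the client'
```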
So why is it going to change the world? Well, what you might know is that sites like Silk Road and all of its clones lived on the Darknet, away from the visible internet.
It was a sharp wake up call for law enforcement agencies everywhere, as the war on drugs went online. It also became a marketplace for hitmen, weapons and other black market gear. Now that the idea of a black market eBay has gone viral, there’s no telling what will pop up on the dark net next.
Deep structured learning is one of computer science’s most intriguing disciplines. Essentially, it involves the creation of computer systems that can make reasoned decisions based on prior experience with learning data sets — in short, a computer that can “think” for itself. But how do you build a machine learning system that actually works? This PowerPoint presentation attempts to map out the entire process in a single slide.
Professor Andrew Ng is chief scientist at web services company Baidu and one of the brains behind Deep Image, the most accurate computer vision system in the world. At this year's Nvidia GPU Technology Conference, Ng gave a speech on the principles of deep learning in machines, including a layman's guide to building new systems that work.
The above slide provides the basic recipe for successful machine learning (start at the top left and follow the arrows to complete the steps). Ng explained the process thusly:
When I’m building a machine learning system, the first thing I ask is “does it do well on the training data?” If it doesn’t, then I would build a bigger network, or “rocket engine”, so you have more neurons, more weights to try and fit the training data well.
Once you fit the training data well, you see if it fits the test data or development data. If it doesn’t do well on the test data but you’re doing well on the training data, that means you’re overfitting. The most reliable cure for overfitting is to get more data, to get more rocket fuel.
And then you keep going around and around and around until eventually it does well in the training data, it does well in the test data and then hopefully you’re done.
Ng advised that this was a highly simplified explanation of what his job entails and often computer scientists still run into problems even after following these steps. At this point, you need to modify the network architecture. Or cast some black magic.
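The loop Ng describes can be written out as a runnable toy. The "model" here is just a capacity number and the dataset just a size — hypothetical stand-ins so the control flow can actually run, not a real training pipeline:

```python
# A toy rendering of Ng's recipe: fit the training data (grow the
# network if you can't), then check the test data (get more data if
# you're overfitting), and loop until both look good.
def recipe(capacity: int, train_size: int) -> tuple[int, int]:
    while True:
        fits_train = capacity >= 4        # stand-in for "does well on training data"
        generalises = train_size >= 1000  # stand-in for "does well on test data"
        if not fits_train:
            capacity *= 2                 # bigger network: the "rocket engine"
            continue
        if not generalises:
            train_size *= 2               # more data: the "rocket fuel"
            continue
        return capacity, train_size       # does well on both -- hopefully done

print(recipe(1, 250))  # (4, 1000)
```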
Changing the world is one thing, but changing the solar system is another. Thanks to Elon Musk, we’ll soon have commercially viable, relaunchable rockets that can make space-flight cheaper and more accessible than it ever has been before.
With SpaceX rockets, we may soon be able to achieve the dream of building a self-sustainable colony on Mars. It’s what’s next.
SpaceX already has the Falcon Heavy rocket in development, which has the power to lift the kind of load needed for a Mars mission and first settlement.
They are also working on Dragon V2, which is clearly designed with Mars (and Moon) landings in mind, thanks to a vertical rocket-based landing system that will allow for immediate fuel reload and relaunch.
While Dragon V2 won't be the ship that arrives at Mars, its successor probably will be. Space fans are already dreaming about that spaceship.
What’s your pick for future tech that will change the world? Tell us in the comments!
Jamie Condliffe, Chris Jager, Campbell Simpson, Adam Clarke Estes and Eric Limer contributed to this article.