In the Den of the Lionfish

Armed with a fresh cup of Colombian coffee, Ethan opened his laptop and reviewed the latest data. The numbers were looking very good. He optimistically envisioned a lot of smiles and nods from the audience at the upcoming presentation. His brief daydream was interrupted by an on-screen message from Unit 7. Ethan clicked the alert and a window popped up to display the image Unit 7 was sending from its underwater location in the nearby reef. Ethan reviewed the image and easily recognized that it was indeed a lionfish. Unit 7 claimed that there were no divers in the vicinity and sent a panoramic image to support that claim. Ethan had seen enough information from the robotic submarine and clicked the confirmation button on the application page. According to Ethan’s recent numbers, that action would seal the fate of this particular lionfish.

Common Lionfish

Unit 7 was one of many robotic submarines operating off the coast in an effort to control the invasive lionfish population. The environmental and economic effects of the species had reached critical levels. The robots communicated using on-board underwater modems that sent their signals to local buoys, which then relayed the communication to the larger wireless network. Human operators received and sent signals to the units over the same network. A centralized server coordinated the swarm in order to optimize coverage of the area.
Upon receiving confirmation of its current mission, Unit 7 locked in on the image of the lionfish and began its pursuit. Although the lionfish is a relentless predator, largely impervious to attack from other species, it is not a very quick or agile swimmer. Advantage: the robotic submarine units. As Unit 7 closed within four feet of its target, it algorithmically analyzed the fish’s swimming pattern and fired its small harpoon with a burst of compressed air. The tethered harpoon pierced the fish. As the fish struggled, Unit 7 moved toward the surface and relayed an indication of its achievement. The message was a request for a rendezvous with a differently purposed robotic vehicle in the area: one that would transport the captured fish to the base station.

Upon rendezvous, Unit 7 and the transport carried out their aquatic choreography. The transport spotted the captured fish in tow and extended a sinewy hand that collapsed around the fish almost like a mechanical Venus flytrap leaf. With a quick signal to the harpoon, the barb retracted and the submarine reeled in the tether. The transport’s end effector pivoted and dropped the fish into its central storage unit, which was designed to adequately preserve it. Although the fin rays of the fish are venomous, it is edible if prepared properly. Several local charities were benefiting from the captured fish.
The fleet of robotic submarines was part of a larger effort that utilized mobile robots to control invasive species across the country. When the public was made aware of the effort, there was some trepidation at the thought of semi-autonomous robots hunting their prey among the residents. It took trials, time, and numbers to substantiate the proclaimed safety of the robots. One of the most attractive aspects of the robot programs was the “global off switch”. More organic control methods involved the introduction of species and substances, but those approaches ran the risk of introducing a new imbalance to the environment. Conversely, once their goal numbers were met, the robots would be switched off and removed, with no ongoing impact or evidence of their historic presence.
This was the second killing for Unit 7 in the past hour. It returned to the floating base station and docked for a well-deserved inductive recharge.
Back at his desk, Ethan received a text message from his friend, Thomas, who was managing a team of land-based robots that were hunting Burmese pythons in the Miami-Dade area. The message included a picture of Mongoose Nine with its latest capture.

Simple tutorial on rosbridge and roslibjs

This tutorial demonstrates how to create a simple web page that communicates with ROS using rosbridge and roslibjs.
No previous knowledge of ROS is necessary to try out the code in this tutorial, but I will not go into great detail on the concepts of ROS and its commands and utilities.

Configuring your system for the tutorial

This section will step you through the installations required to carry out the demo.

1. Install ROS

If you do not have ROS installed on your computer, install it following the instructions here.
I am using the hydro release on my computer. Other releases will probably work, but the names and structures of topics and messages might be slightly different. Also, this tutorial will use the turtlesim demo program, so you will need that installed. The desktop-full installation of ROS includes this demo. I chose to set up my environment as indicated in section 1.6 to add the ROS environment variables to all new terminal windows. Finally, this tutorial will assume you are running ROS on Linux.

2. Install rosbridge

Open up a terminal window and type

 sudo apt-get install ros-hydro-rosbridge-suite

Detailed instructions can be found here, but note that they relate to the groovy release, not hydro.

Turtlesim introduction

Before I get into the details of rosbridge, I will use the turtlesim demo to introduce some fundamental ROS concepts and commands. If you encounter problems in this section, your computer is probably not configured correctly. Refer back to the installation links for details on setting everything up.

1. Run roscore

Open up a terminal window and type

 roscore

This command starts the ROS master program, which provides core services and coordinates communication between publishing and subscribing nodes. More detail on those terms and concepts will follow.
You can minimize this terminal window after you start roscore.

2. Run the turtlesim simulator window

Open up a terminal window and type

 rosrun turtlesim turtlesim_node

This command will launch the application that displays the simulated turtle robot. ‘turtlesim’ is the name of the ROS package and ‘turtlesim_node’ is the name of the application within the package that will be executed. The turtle in the window is essentially listening for movement instruction messages.
You can minimize the terminal window used to launch the simulator.
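
As an aside, the general form of this command is the same for launching any ROS node: the first argument names the package and the second names the executable within it.

 rosrun <package_name> <executable_name>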

3. Run the turtlesim control window

Open a terminal window and type

 rosrun turtlesim turtle_teleop_key

This command will run the ‘turtle_teleop_key’ application node within the ‘turtlesim’ package. In order to send commands to the ROS master, this terminal will need to have focus as you type the left, right, up and down arrow keys.

4. See the list of ROS topics

Open a terminal window and type

 rostopic list

This command will list the topics currently available from roscore and the nodes that were launched. Topics are essentially channels that node applications can publish and subscribe to. The publish/subscribe (pub/sub) messaging concept is a common model in software systems. I will not go into the ideas in detail, but will quickly say that the concept is fundamental to ROS and very relevant to a system where sensors, actuators, and so forth may be interdependent and interchanged.
One of the topics displayed by this command will be of particular interest to us. Its fully qualified name is:
/turtle1/cmd_vel
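
If you have the nodes from the previous steps running, the full list should look something like this (the exact set of topics may vary slightly between ROS releases):

 /rosout
 /rosout_agg
 /turtle1/cmd_vel
 /turtle1/color_sensor
 /turtle1/pose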

5. Find the message type for the relevant topic

In the same terminal window type

 rostopic info /turtle1/cmd_vel

This command will display information about the topic including the type of messages that will be published and consumed. Messages are objects in the sense that they can be composed of primitive values and other structures containing primitive values. The message type for the /turtle1/cmd_vel topic is indicated as
geometry_msgs/Twist
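
The full output should look something like this (I have omitted the node URIs that follow each name; the publisher is the teleop node from step 3 and the subscriber is the simulator):

 Type: geometry_msgs/Twist

 Publishers:
  * /teleop_turtle

 Subscribers:
  * /turtlesim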

6. Investigate the message structure

In the same terminal window type

 rosmsg show geometry_msgs/Twist

You will see this output

 geometry_msgs/Vector3 linear
   float64 x
   float64 y
   float64 z
 geometry_msgs/Vector3 angular
   float64 x
   float64 y
   float64 z

This output indicates that the geometry_msgs/Twist message structure is composed of two instances of another ROS type: geometry_msgs/Vector3. Within the geometry_msgs/Twist type, these properties are named linear and angular.
If you run this command in the terminal window

 rosmsg show geometry_msgs/Vector3

you will see that this type is composed of three float64 properties named x, y, and z.

7. Monitor messages sent to the relevant topic

In the same terminal window type

 rostopic echo /turtle1/cmd_vel

This command will display information related to the messages published to the named topic, in this case /turtle1/cmd_vel.
The command will run in the terminal window until it is terminated with a Ctrl+C.

8. Run the demo

Finally, time to take the turtle for a ride. The three relevant terminal windows for this step are the simulator display (step 2), the control window (step 3), and the window that will echo messages sent to the relevant topic (step 7). Make sure all three windows are visible on your desktop.
Click in the control window to give it focus. Then use the arrow keys to rotate and move the turtle.
Observe the output in the topic echo window. Note how the values change depending on the keys you press. For example, if you press the up arrow you should see this output:

 linear:
   x: 2.0
   y: 0.0
   z: 0.0
 angular:
   x: 0.0
   y: 0.0
   z: 0.0

Before we start on the next section, which investigates how rosbridge works, I will summarize the important points of this section:

  • The roscore master was started in order to manage the communication of messages between publishing and subscribing nodes
  • The simulator window node was launched as a subscriber to the topic relevant to the demo
  • A terminal window was opened to publish messages to the topic relevant to this demo

Controlling turtlesim from a web page

In this section we will build a minimal html page to control the turtle in the simulator.
The section will use rosbridge, which includes a set of tools that provide a JSON API for communication with the ROS server. I should point out that I am fairly new to ROS in general. One of the first things I learned was that node applications are typically written in C++ or Python: two languages that I am not proficient in. So I was interested in the idea of rosbridge, which allows ROS communication using tools like JavaScript over WebSocket. This section will also use the ROS JavaScript library, roslibjs. Much of what I am writing in this section is based on what I learned in this tutorial.
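
To give a sense of what travels over that JSON API, here is roughly what a publish operation for our turtle topic looks like on the wire, per the rosbridge v2 protocol. We will not build this JSON by hand; roslibjs constructs it for us.

 {
   "op": "publish",
   "topic": "/turtle1/cmd_vel",
   "msg": {
     "linear": { "x": 1.5, "y": 0.0, "z": 0.0 },
     "angular": { "x": 0.0, "y": 0.0, "z": 1.5 }
   }
 }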

1. Launch rosbridge

Open a terminal window and type this command

 roslaunch rosbridge_server rosbridge_websocket.launch

This command will run rosbridge and open a WebSocket on port 9090 that our web page will use to communicate with ROS.
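
Optionally, before building the page, you can verify the WebSocket is up straight from your browser’s JavaScript console. This is just a hand-rolled sanity check using the standard WebSocket API; the subscribe operation follows the rosbridge v2 protocol, which lets us omit the message type for an existing topic.

 // Open a raw WebSocket connection to rosbridge
 var ws = new WebSocket('ws://localhost:9090');
 ws.onopen = function() {
     // Ask rosbridge to forward messages published to /turtle1/pose
     ws.send(JSON.stringify({ op: 'subscribe', topic: '/turtle1/pose' }));
 };
 ws.onmessage = function(event) {
     // Each event carries one JSON-encoded ROS message
     console.log(event.data);
 };

If turtlesim is still running, you should see a steady stream of pose messages in the console.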

2. Create an html file for the control panel

This web page is intended to demonstrate how roslibjs and rosbridge can be used to communicate with ROS. The page will not employ best practices like the use of style sheets or JavaScript libraries like jQuery. I will annotate the web page with comments that will explain the important lines.

<!DOCTYPE html>
<html>
<head>
<!-- Based on demo found here:
http://wiki.ros.org/roslibjs/Tutorials/BasicRosFunctionality
-->

<!--
The next two lines bring in the JavaScript files that support rosbridge integration.
-->
<script type="text/javascript" src="http://cdn.robotwebtools.org/EventEmitter2/current/eventemitter2.min.js"></script>
<script type="text/javascript" src="http://cdn.robotwebtools.org/roslibjs/current/roslib.min.js"></script>

<script type="text/javascript" type="text/javascript">

// This object connects to the rosbridge server running on the local computer on port 9090
var rbServer = new ROSLIB.Ros({
    url : 'ws://localhost:9090'
});

// This function is called upon the rosbridge connection event
rbServer.on('connection', function() {
    // Write appropriate message to #feedback div when successfully connected to rosbridge
    var fbDiv = document.getElementById('feedback');
    fbDiv.innerHTML += "<p>Connected to websocket server.</p>";
});

// This function is called when there is an error attempting to connect to rosbridge
rbServer.on('error', function(error) {
    // Write appropriate message to #feedback div upon error when attempting to connect to rosbridge
    var fbDiv = document.getElementById('feedback');
    fbDiv.innerHTML += "<p>Error connecting to websocket server.</p>";
});

// This function is called when the connection to rosbridge is closed
rbServer.on('close', function() {
    // Write appropriate message to #feedback div upon closing connection to rosbridge
    var fbDiv = document.getElementById('feedback');
    fbDiv.innerHTML += "<p>Connection to websocket server closed.</p>";
});

// These lines create a topic object as defined by roslibjs
var cmdVelTopic = new ROSLIB.Topic({
    ros : rbServer,
    name : '/turtle1/cmd_vel',
    messageType : 'geometry_msgs/Twist'
});

// These lines create a message that conforms to the structure of the Twist type defined in our ROS installation
// It initializes all properties to zero. They will be set to appropriate values before we publish this message.
var twist = new ROSLIB.Message({
    linear : {
        x : 0.0,
        y : 0.0,
        z : 0.0
    },
    angular : {
        x : 0.0,
        y : 0.0,
        z : 0.0
    }
});

/* This function:
 - retrieves numeric values from the text boxes
 - assigns these values to the appropriate values in the twist message
 - publishes the message to the cmd_vel topic.
 */
function pubMessage() {
    /**
    Set the appropriate values on the twist message object according to values in text boxes
    It seems that turtlesim only uses the x property of the linear object 
    and the z property of the angular object
    **/
    var linearX = 0.0;
    var angularZ = 0.0;

    // Get values from the text input fields. Note for simplicity we are not validating.
    linearX = Number(document.getElementById('linearXText').value);
    angularZ = Number(document.getElementById('angularZText').value);

    // Set the appropriate values on the message object
    twist.linear.x = linearX;
    twist.angular.z = angularZ;

    // Publish the message 
    cmdVelTopic.publish(twist);
}
</script>
</head>

<body>
<form name="ctrlPanel">
<p>Enter positive or negative numeric decimal values in the boxes below</p>
<table>
 <tr><td>Linear X</td><td><input id="linearXText" name="linearXText" type="text" value="1.5"/></td></tr>
 <tr><td>Angular Z</td><td><input id="angularZText" name="angularZText" type="text" value="1.5"/></td></tr>
</table>
<button id="sendMsg" type="button" onclick="pubMessage()">Publish Message</button>
</form>
<div id="feedback"></div>
</body>
</html>

What Did We Do?

So what did we accomplish in this tutorial? Something pretty cool in my opinion: we created a new controller for the existing turtlesim node without modifying that node’s code at all. The decoupled publish/subscribe approach that ROS supports made this accomplishment possible. I could argue that the simple node we created is superior in some ways to the command window that comes with the complete ROS installation:

  • It seems that the arrow keys always send a 2 or -2. We can send any values using our web page to make the movements greater or finer grained.
  • As much as I tried, I could not send linear and angular values in the same message by pressing the keys simultaneously. We can do that with the web page, which allows the turtle to travel in arc paths.

Of course, we only published a message in this tutorial. I should point out that there is much more you can do with roslibjs (a brief sketch follows the list), including:

  • Subscribing to topics in order to receive messages
  • Utilizing services hosted within ROS
  • Retrieving a list of current topics within the ROS server
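
Here is a minimal sketch of those three capabilities, reusing the rbServer connection from our page. This is my own illustration rather than code from the roslibjs tutorials; the topic, message, and service names are the ones that ship with the turtlesim demo, and the exact shape of the getTopics callback may vary between roslibjs versions.

 // Subscribe to the turtle's pose topic and log each message we receive
 var poseTopic = new ROSLIB.Topic({
     ros : rbServer,
     name : '/turtle1/pose',
     messageType : 'turtlesim/Pose'
 });
 poseTopic.subscribe(function(message) {
     console.log('Turtle at x=' + message.x + ', y=' + message.y);
 });

 // Call the teleport service hosted by the turtlesim node
 var teleportSvc = new ROSLIB.Service({
     ros : rbServer,
     name : '/turtle1/teleport_absolute',
     serviceType : 'turtlesim/TeleportAbsolute'
 });
 var request = new ROSLIB.ServiceRequest({ x : 5.0, y : 5.0, theta : 0.0 });
 teleportSvc.callService(request, function(result) {
     console.log('Turtle teleported to the center of the window');
 });

 // Retrieve the list of current topics from the ROS server
 rbServer.getTopics(function(topics) {
     console.log(topics);
 });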

Next Steps

So what’s next? I think I’m going to get myself one of those Baxter robots for $25K, build the appropriate web application and never wash dishes again. Ok, maybe not yet…soon, but not just yet. There are probably a couple of other tracks I can progress on first.

Implementation on Raspberry Pi

I have another long-term goal to build a disruptively affordable mobile robot platform and implement the first one as an outdoor rover. I imagine that the robot will be controlled by an SBC like a Raspberry Pi and involve an Arduino board. I have heard that some people have found it challenging to run ROS on the Raspberry Pi, but it looks like there have been some successes as well. I imagine I would start by running a minimal amount of ROS on the Raspberry Pi and using my desktop computer for development and debugging. I could install Apache or Tomcat on the Raspberry Pi, but it may make sense to build a lightweight HTTP server using libraries like Node.js and socket.io. I also want to try to use Cylon.js for tasks like communicating with the Arduino.

Better UI

Ok, I feel like we’re pretty close friends now, so I will tell you this: the web page built in this tutorial is not all that attractive or slick. There are a lot of options for improving it (a small sketch follows this list):

  • jQuery UI has a number of great widgets
  • jQuery Mobile makes it very easy to develop applications for mobile devices
  • I know some great developers that are favoring Ember.js in order to create ambitious web applications
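
As one example, here is a small sketch that swaps the Linear X text box for a jQuery UI slider that publishes the twist message as you drag it. The element id linearSlider is hypothetical, and the snippet assumes the rbServer, cmdVelTopic, and twist objects from the page above are already defined.

 <!-- Include jQuery and jQuery UI (these CDN versions are just an example) -->
 <link rel="stylesheet" href="https://code.jquery.com/ui/1.11.1/themes/smoothness/jquery-ui.css"/>
 <script src="https://code.jquery.com/jquery-1.11.1.min.js"></script>
 <script src="https://code.jquery.com/ui/1.11.1/jquery-ui.min.js"></script>

 <div id="linearSlider"></div>
 <script type="text/javascript">
 // Turn the div into a slider that publishes the twist message as it moves
 $('#linearSlider').slider({
     min : -2.0,
     max : 2.0,
     step : 0.1,
     slide : function(event, ui) {
         twist.linear.x = ui.value;   // ui.value is the slider's current position
         cmdVelTopic.publish(twist);
     }
 });
 </script>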

I’m looking forward to seeing what others do with rosbridge and roslibjs. Many thanks to everyone involved in these projects.


Robotic Explorations

The following story depicts a fictitious (at this point in time and as far as I know) company. If you happen to create a company based on this story, please offer me at least a free tour when you make the inevitable fortune.

[Image: Outback view from Chambers Pillar]

I experience one of those brief heart-stopping moments as the email appears in my Inbox. It is the name of the sender that catches my attention: Robotic Explorations. This email is the online invitation to my first self-guided tour. And I am going to explore the desolate reaches of the Australian Outback.

I had done one group tour in a South American rainforest with Robotic Explorations previously. The price was much lower mainly due to the larger number of virtual tourists, but after taking that tour I was determined to put myself in the driver’s seat and make my own path. But I am getting ahead of myself…I should probably explain what Robotic Explorations is all about.

As they say on their website, they offer their customers a unique virtual touring experience aided by a semi-autonomous robot. Basically they have a fleet of these very mobile rover-type robots. They are equipped with some kind of hybrid engine that provides lots of mileage very quietly (which is important when you want to spy on the local wildlife). Their website provides the interface to your robot which their local team deploys to your starting spot. Below is a picture of the interface you access from the web page.

[Image: rover control panel]

So I’m getting all ready to start my trip. I chose a 24-hour time period. I have my computer hooked up to some nice speakers so I can get a good listen to the environment, and I bought a Google Chromecast so I can see what the robot is seeing on my HD big screen.

Ok, I just got the message: “Congratulations, your rover is activated. Happy trails!” And the picture is coming in…wow, so cool. I am virtually in the Australian Outback. I was wondering about bandwidth issues, especially since these tours are often in the middle of nowhere. Apparently Robotic Explorations has addressed that challenge in its touring areas. I have heard explanations ranging from line of sight laser signals beamed from towers to hovering broadband repeaters…whatever it is, the picture and sound quality are great.

The robot is asking me where we should go. Let me ask him for a quick video pan of the area. Alright, we are going to head northwest. The terrain map indicates some type of small forest. Maybe we will see some cool creatures. So I’ll just click on that area and my robotic ambassador will route there as best he can.

My control panel just flashed a “Rough terrain…” indicator. Uh oh…our first setback. It seems that the rover has tumbled down a hill and is on its side. The panel indicates: “Rover overturned; activating outriggers…”. I can see by the camera picture that he is righting himself, and now we are back on our wheels again. The panel displays “Reattempting previous route with increased torque.” Very carefully the robot ascends the hill and reaches more level ground. Fantastic effort.

I have the side cameras activated as we are driving in case something catches my eye. Wow, something just caught my eye. Not an animal but a really great view of the landscape. Let me stop him now and take a snapshot. Forgot to mention, my trip is linked to my Facebook account so I can post these photos as I take them. The application also puts a pin in my map where I took the snapshot. I’m adding the caption: “Enjoying the late afternoon view of the Australian Outback (with a beer).” Ok, onward…

Hold on, I just saw an alert icon on my display. The laser sweep picked up some movement a little to our east. Let’s check it out. I just clicked on the alert icon where it appeared in the map view and the rover is now re-routing and approaching. One nice thing about their app: you can have different web URLs for your control panel vs. your camera views. So I have the camera views on my big screen and I’m actually using their iPad app to control the robot. The rover is getting close to the source of the motion so I got the message: “Switching to stealth mode”. I clicked “Ok”. I think this runs the motor on all battery to make him real quiet on approach. I think I see something. I’m stopping the rover and zooming in. It’s a bunch of birds around a small pond. Strange looking birds, cool. Going to snap another photo for Facebook. Just noticed my friend posted a comment on my last picture: “What the hell are you doing in the Australian Outback?”

I’m back…I’ve been a little lazy about updating this post because I’m really enjoying just wandering around and feeling like I’m out there in the great abandon. Another movement alert up ahead. It’s horses…three small horses wandering by. I click on a horse in the camera display and select “Track with rover”. This action makes the rover follow them from an adjustable offset. We follow them for a ways as they head towards a large rock formation. They enter a narrow ravine between two steep rocks and we continue after them. The robot’s camera adjusts for the dimmer light and activates the picture stabilization. Suddenly one of the horses lets out a snort and they race off. I don’t think we will be able to catch them. I zoom in on the satellite picture to see if we can make it through this ravine. The rover’s LIDAR is optimistic about a way out so we continue.

My attention is drawn to the left side camera. What I first thought were just discolorations on the rock walls that line this ravine are actually…paintings. They remind me of prehistoric cave paintings. They seem to depict some sort of creature with claws being held off by people.

[Image: rock painting]

It’s a little difficult to describe. I’ll take some photos and post them on Facebook to see what other people think. I remember that my contact at Robotic Explorations indicated my rover was being deployed to a very remote part of the Outback seldom hiked by people due to its desolation. My rover was delivered to its starting point by a small helicopter. Could it be that my photographs are the first to be taken by an outsider? Am I the first visitor to this land to make this discovery? Although my photos will record the latitude and longitude, I am definitely marking this point with a trip pin.

It’s getting dark now. I think I am going to take a look at the sky with the panoramic camera. The robot camera is adjusting for the dim light. Now I can see stars…lots of stars. Another photo opp: “Beautiful night in the Australian Outback.” What’s that? Rover picked up a significant sound reading not too far off. I click on the direction to indicate my approval. Let’s go.

Once again we go into stealth mode as we approach objects in motion. Too dark so night vision is enabled. Not as brilliant as day time shots but what are you going to do? Ooh, I just saw the reflection of some glowing eyes (enough moonlight for that I guess). Wow, about four…nope, at least eight dog things. Maybe these are dingoes. Did I mention I’m not an expert on the Australian Outback? Doesn’t matter to me; makes it even more interesting and surprising in some ways. Yeah, I’m seeing a pack of these dogs eating something on the ground. Looks like feathers, some kind of very big bird. I’m going to tell Rover to move in very slowly. This warrants video; I have enough allowance for some footage on this trip. Audio is good too. Can hear them yelping at each other as they compete for the best eating spots. After a while, they disperse into the darkness.

I should mention I really can’t go anywhere I want. The map has boundaries, I guess limited to where Robotic Explorations has worked out touring rights. If I try to go beyond the boundaries, the robot won’t let me. But it doesn’t matter, the area is bigger than what I could probably see in a year. Speaking of that, this tour route (the route of points to where I actually go) will be saved with my account. So if I do happen to want to come back here, I can overlay previous trips to revisit key pinned places or make sure I explore new areas.

I’m only a couple of hours into this trip but I can’t explain how cool it is to feel like I’m exploring this part of the world “on my own”. I should mention that Robotic Explorations has another option that is more expensive but sounds unbelievably cool. You can rent a rover that is also a docking station for a UAV drone. The drone uses the fuel-powered rover as a charging station when it has to. Basically you can release the drone and get an aerial view (and photos and videos) from where you are. You control the altitude with the control panel and click the destination as well – just click on the map and the drone will go there. There is also a feature to track an object that the drone camera recognizes, so the drone will follow your target with its camera if it moves. With one click you can return the drone to the docking station rover. I have also heard rumors of plans for submarine tours in the future. Going to start saving my pennies for the next trip.

So how should I answer all these “where ARE you??!!” comments popping up on Facebook?

( 8^])X

Photos that were not purchased were taken from Wikimedia Commons or provided by friends of mine who were kind enough to share them.
Wireframe mockups were developed with draw.io.