
Some thoughts on cooperative robotics

This is a series of thoughts I've been slowly incubating lately about the nature of dynamic robotics. The organization is a bit haphazard, but it runs as follows: first, a section on how a cooperative system of robots (I will leave the interpretation of that word up to you: maybe it means simply computers, maybe humanoid androids, maybe robotic arms on an assembly line) will communicate with each other, and possibly with humans as well. Second, how they will identify, undertake, and divide the labor for the various tasks they take on. Last, some thoughts on specific details that would help close the gap between digital and human understanding of physical worlds, social environments, and so on.

On Interaction

Robots need a language. I don't say this because they need to be able to communicate with us (humans); I say it because languages enable a very high level of abstraction, which in turn allows for a very high rate of data transfer between humans. Think about it for a moment: if everybody had to describe what every item was, how it worked, and what it looked like every time they mentioned it, human society would never have advanced past the Stone Age.

Instead, we evolved and came up with a solution: give a specific phenomenon a name, and describe the phenomenon only when you need to. This has the twofold benefit of letting us group items under logical classifications, and letting us bring each other up to speed very quickly, because a listener can identify exactly which words name concepts that are unknown to them.

The tools for classifying and selecting from data differ in every language, and every language has exact and precise tools for specifying events, their order, and the relationships between phenomena (people, objects, and events). English is a good example of a language with a very strong chronological toolset; this is excellent for distributed work because it allows for efficient coordination ("Is everybody done with step 2? OK, let's move on to step 3!"). In the same way, other languages are more efficient at implying the relationships between things. The Thai language, for example, implies the relationship between two people very efficiently: it lets them select each other from a group of individuals with a choice of roughly nine commonly used words for the English word 'you', which can imply greater or lesser age of one person or the other, a physical or romantic relationship (both can be distinctly implied), a familial bond, friendship, authority and subservience, compliance or rebellion, and regard or disregard for the other person.

Any language a robot uses will need at least the following parts:
  • a means for implying an item
  • a means of implying actions
  • a means of implying the state of an item (but not limiting an item to known states)
  • a means of implying a set (this, these, that, those, them)
  • a means of selecting from an implied set (this one, that one) or of selecting multiple items from a set (these ones, those ones)
  • a means of sequencing actions
  • a means of implying historical or impending sequences
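As a sketch of how these elements might fit together in practice, a robot "utterance" could be a structured message with one slot per element above. Every name here is hypothetical and illustrative; this is not an existing protocol:

```python
# A minimal, hypothetical message format covering the elements listed above.
# None of these field names come from a real protocol; they are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Utterance:
    item: Optional[str] = None     # a means of implying an item ("crate")
    action: Optional[str] = None   # a means of implying an action ("lift")
    state: Optional[str] = None    # implied state, not limited to known states
    item_set: tuple = ()           # a means of implying a set ("those")
    selected: tuple = ()           # indices selecting from the implied set
    sequence: int = 0              # position in an ordered plan (step number)
    tense: str = "present"         # historical or impending: "past", "future"

# "Lift that crate (the third of those) as step 3" might become:
msg = Utterance(item="crate", action="lift",
                item_set=("crate-1", "crate-2", "crate-3"),
                selected=(2,), sequence=3, tense="future")
print(msg.item_set[msg.selected[0]])   # the specific crate the sentence picks out
```

The point of the sketch is that set implication and selection are separate slots, so a robot can refer to "those" once and then pick individuals out of that set cheaply, just as the list above requires.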

On the Division of Work

Robots (keeping in mind the very undefined nature of that word in this context) will need three distinct abilities. First, they will need to identify which tasks need to be done (which is another discussion in itself, to be covered at a later time). Second, they will need to identify computational work and sort it according to importance and difficulty. Third, they will need (eventually) to identify physical work and sort it according to necessity and according to the capabilities of the bots in a particular network of robots.
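The "sort by importance and difficulty" step could be sketched with a priority queue. The scoring rule below (importance first, with easier tasks breaking ties) is my own assumption for illustration, not a scheme the post prescribes:

```python
# Hypothetical sketch: rank computational tasks by importance and difficulty.
# The tie-breaking rule (easier work first among equally important tasks)
# is an illustrative assumption, not taken from the post.
import heapq

def schedule(tasks):
    """tasks: list of (name, importance, difficulty); returns execution order."""
    # Negate importance so the highest-importance task pops first.
    heap = [(-imp, diff, name) for name, imp, diff in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

print(schedule([("recharge", 5, 2), ("map-room", 9, 7), ("sort-parts", 9, 3)]))
# → ['sort-parts', 'map-room', 'recharge']
```

Physical work would need a richer key (which bot has the right actuators, where it is), but the same ranking structure applies.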

The nature of digital media allows a great many capabilities that traditional human communication does not. The ability to share video feeds, for example, would let multiple robots literally look at the world through each other's eyes, move each other's limbs, and take actions within the capabilities of robots that they themselves might physically lack. Literal thoughts could be transmitted between robots: knowledge could pass, in seconds, not just between one or two robots but between all robots in a network. This creates an opportunity for cooperation, but it also carries an inherent danger. The dangers are twofold. First, there is always the danger in such networks that data harmful to the system can be transmitted, and that it can be forged, fabricated, and retransmitted nearly indefinitely. Second, and not unrelated: the robots could decide that we as humans are no longer worth listening to, or worse, that we are a threat, and that an identifiable task (of as-yet undecided importance) would be to remove us as efficiently as possible. Since the first form any such network of robots will predictably take is as a weapon, it follows that a large majority (not a minority) of these robots will be armed and fully capable of the task at hand.

On the Details

There are only a handful of details in the physical world that separate humans from robots. The first is the availability of power (a gap that is closing quickly); the second is the efficiency and availability of the systems through which a robot physically interacts with the world (a gap which is also closing quickly). The larger gaps are all mental, and many of them seem to have to do with abstraction. This is nothing for us to be ashamed of; it doesn't mean that we as programmers are deficient, it means we haven't, as a group, really started working on these problems.

One problem in particular that I have been working on is the 3D interpretation of an environment via polyscopic vision (note that I am not saying binocular or stereoscopic, because I am not implying two cooperative camera views, although in this case I am also not excluding them). With a true capability to process polyscopic vision, a robot can efficiently determine the disposition of its surrounding environment. The biggest thing that seems to hold back the programmers who task themselves with this is that they want to compare whole objects that are on screen. This is terribly inefficient. The easiest way to compare the fields of vision of two cameras is to compare the Hough transforms of strings (rows) of pixels along the angles at which their fields of vision are determined to intersect. From here there are any number of things you can do with the data; I leave that to your imagination.
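To make the row-comparison idea concrete: this is my own interpretation, and as a simpler stand-in for the Hough transforms the post proposes, it reduces each scanline to an edge signature and slides one signature against the other. The best-matching shift is a crude disparity estimate; all pixel values are invented:

```python
# Hedged sketch of comparing corresponding pixel rows from two cameras.
# The post proposes comparing Hough transforms of rows; as a simpler
# stand-in for that idea, this compares 1-D edge signatures instead.

def edge_signature(row, threshold=30):
    """Mark positions where adjacent pixel intensities jump sharply."""
    return [1 if abs(row[i + 1] - row[i]) > threshold else 0
            for i in range(len(row) - 1)]

def best_shift(row_a, row_b, max_shift=8):
    """Return the horizontal shift that best aligns the two rows' edges."""
    sig_a, sig_b = edge_signature(row_a), edge_signature(row_b)
    scores = {}
    for shift in range(max_shift + 1):
        # Slide sig_a left by `shift` and count coinciding edges.
        scores[shift] = sum(a * b for a, b in zip(sig_a[shift:], sig_b))
    return max(scores, key=scores.get)

# Two synthetic rows: the same bright bar, offset by 3 pixels.
left  = [10] * 10 + [200] * 5 + [10] * 10
right = [10] * 7  + [200] * 5 + [10] * 13
print(best_shift(left, right))   # → 3
```

The appeal of working row-by-row, whether with edge signatures or full Hough transforms, is exactly the efficiency argument above: rows are cheap 1-D signals, and no object-level comparison is needed before geometry falls out.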
