Philosophy Phriday: To Understand Ant Communication, We Can’t Forget What Ants Forget

The Daily Ant hosts a weekly series, Philosophy Phridays, in which real philosophers share their thoughts at the intersection of ants and philosophy. This is the fifty-fourth contribution in the series, submitted by Dr. Daniel Singer.


To Understand Ant Communication, We Can’t Forget What Ants Forget

It is well-known (to any reader of this blog, anyway) that ant communication is very complex and not entirely well-understood. Among myrmecologists, there is disagreement about how information is transferred (most think that pheromones play a key role, but some think there may be other mechanisms at play, including sound), what kind of information is transferred, and whether we should explain ant communication in terms of the communication behaviors of individuals or groups.

A blur of communication. Image: Alex Wild

A common technique for investigating complex phenomena is to start with a simple model of the phenomenon to be explained and use that model to guide our later research. So if we were to start with a simple model of ant communication, what might that look like? A natural starting place would be to model ant communication by a simple network diffusion model. In such a model, we’d start with a collection of ants, each of which has some “neighbor” ants that they can communicate with. Waters and Fewell (2012) investigated the networks created by harvester ants, so those networks might be a good place to start. The development of one such network over 60 seconds is represented in panel A below.

[Figure: development of a harvester ant interaction network over 60 seconds, panel A]

At the start of the model, some ants are represented as having information, such as the location of some food, the location of an enemy, or the best way to get to some destination. The model then proceeds step by step: if an ant has some information at one step, then, with some probability, that ant’s neighbors get the information at the next step. Information spreads across the network, and in a connected network, we’d expect everyone to get it eventually. In this model, the movement of information mirrors the movement of an infection.
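
To make the diffusion idea concrete, here is a minimal Python sketch of infection-style spread on a contact network. The toy network, the transmission probability, and the initially informed ant are illustrative assumptions of mine, not data from Waters and Fewell (2012).

```python
import random

def spread_information(neighbors, informed, p_transmit=0.3, steps=10):
    """Infection-style diffusion: at each step, every informed ant passes
    the information to each of its neighbors with probability p_transmit."""
    informed = set(informed)
    for _ in range(steps):
        newly_informed = set()
        for ant in informed:
            for other in neighbors[ant]:
                if other not in informed and random.random() < p_transmit:
                    newly_informed.add(other)
        informed |= newly_informed
    return informed

# A toy contact network: each ant maps to the set of ants it can communicate with.
network = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2, 4}, 4: {3}}
print(spread_information(network, informed={0}))  # ants reached after 10 steps
```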

It’s not clear that ant communication can be fruitfully understood with this simple model though. For one, why think that information moves through ant networks like an infection, rather than like estimates of quantities, which are more naturally modelled as spreading by something like averaging with one’s neighbors? Or might it be best modelled like genetic information transfer, which involves mutation and crossover? Simple models of communication like these have been explored by authors like Hegselmann and Krause (2002), Zollman (2007), and Weisberg and Muldoon (2009). In a 2015 article in Philosophy of Science, Patrick Grim and I, along with our co-authors, show that these different forms of information transfer have very different dynamics and levels of fitness.
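
For contrast, here is an equally minimal sketch of the averaging-style dynamic (in the spirit of Hegselmann and Krause 2002, though without their confidence thresholds). The network and the initial estimates are made up for illustration.

```python
def average_with_neighbors(neighbors, estimates, steps=10):
    """Each ant repeatedly replaces its estimate of some quantity with the
    mean of its own estimate and its neighbors' current estimates."""
    estimates = dict(estimates)
    for _ in range(steps):
        estimates = {
            ant: (estimates[ant] + sum(estimates[n] for n in neighbors[ant]))
                 / (1 + len(neighbors[ant]))
            for ant in estimates
        }
    return estimates

# Toy network and initial estimates of, say, the distance to a food source.
network = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
print(average_with_neighbors(network, {0: 10.0, 1: 2.0, 2: 4.0, 3: 8.0}))
```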

There’s something unsatisfying about all of these simple models of communication though. In each model, the ants are treated as mere conduits of information, transferring to their neighbors (perhaps with some probability) the entirety of what they know on each step of the model. Real communication doesn’t involve just holding up flags that convey everything you know. In real communication (at least with humans), we offer our evidence for various propositions, respond to the arguments of others, and the information we provide can combine with the others’ information to give us much more knowledge than any of us had before.

Dung (1995) developed a system of abstract argument structures to model exchanges of arguments. Dung’s models are complex and employ a background network of support and attack relations between arguments. Betz (2016) shows why Dung’s models aren’t great models to understand the epistemology (rather than the logic) of dialectics, and Betz (2013) explores a sophisticated alternative to Dung’s framework for modeling the exchange of information.

What I’d like to suggest here is that a much simpler model, one that is an extension of the diffusion idea from above, might give us some more traction in understanding communication without the added complexities of models like Dung’s and Betz’s. The idea is to start with propositions that the ants know. We can suppose those are about things like where a certain pheromone or food-scent was encountered. Some collections of these propositions constitute arguments for things like where some enemy is, where some food is, or which way is shortest to get to some destination. And finally, each of the arguments has a strength at which it supports what it supports.
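
Here is a minimal sketch of those ingredients, with hypothetical names of my own choosing: a proposition is just an index, and an argument bundles a set of premise propositions, the side it supports, and a strength. For simplicity the sketch assumes a single yes/no question, as in the friend-or-foe example discussed below.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Argument:
    premises: FrozenSet[int]  # the propositions that make up the argument
    supports: bool            # the side of the question it supports (say, True for "foe")
    strength: float           # how strongly the argument supports that side

def supported_attitude(known_propositions, arguments):
    """Weigh every argument whose premises the ant knows, and return the side
    with the greater total strength (None if the two sides are exactly tied)."""
    totals = {True: 0.0, False: 0.0}
    for arg in arguments:
        if arg.premises <= known_propositions:
            totals[arg.supports] += arg.strength
    if totals[True] == totals[False]:
        return None
    return totals[True] > totals[False]
```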

This ant has a proposition – and a pupa – to share with you. Image: Alex Wild

For simplicity, let’s assume that the ants are communicating about only a single main question, perhaps whether some collection of other ants are friends or foes. Then, like in the simple diffusion model described above, we can model ant communication as the exchange of propositions that bear on the main question, not the exchange of all-things-considered views about the question. Since the propositions are premises of arguments for or against positions on the main question, we can think of the exchange of propositions as the ants sharing why they think that the other ants are friendly or not. Because the new model involves exchanges of reasons for belief, it avoids the worry that the ants are mere conduits of information. And because propositions shared by different ants can combine to form an argument that no individual ant had before, the new model can make sense of how information aggregation can lead to knowledge that is not just the simple conjunction of the individual pieces of information had by individual ants.

Supposing that the agents have many reasons for believing what they do, a couple of natural questions arise: how do agents choose which propositions to share when it’s their turn to speak? And what way of choosing what to share would be best for the whole group? Essentially, these are the questions that myrmecologists want to answer in understanding how individual ants contribute to the group’s knowledge. The analogous questions for humans are questions about how group discussions work and how they can be optimized. Those questions are studied by researchers in organizational dynamics, management, psychology, law, and philosophy.

A central example of this work on humans is the work on the hidden profile paradigm in social psychology, which shows that groups tend to focus on their antecedently shared information rather than bringing new information to the forefront (see Stasser and Titus 1985, 1987; for a survey, see Wittenbaum et al. 2004). In light of that research, many have recommended that individuals “speak up” when they disagree or have something new to add to the discussion, and researchers have tested methods for encouraging people to do so. The assumption is that it is better, epistemically speaking, for the group if individuals contribute more of their private, non-shared information to the group than if they keep it to themselves.

More sophisticated accounts of how individuals should contribute to groups can be found in many places, including much of the work of Cass Sunstein (particularly Sunstein 2002). Sunstein and Hastie (2014), for example, give six bits of advice to groups to encourage better information sharing. One is that the group appoint a “red team”, which should try to adduce arguments against certain proposals. Using the model described above, we can test theories of how communication works (and could work better) simply by implementing the proposed mechanisms and seeing how they affect information dispersion in the group.

The red team. Image: Alex Wild

Consider three simple ways ants might share information they have about the friendliness of the other ants: First and most simply, an ant might share a piece of information at random. I’ll call this “random sharing”. Second, the ant might share the piece of information that adds the most to the conversation, in light of what has already been shared (i.e. it has the greatest impact on the strength to which the publicly-available arguments support what they support). I’ll call this “influential sharing”. Finally, the ants might act adversarially by sharing the piece of information that most influences what is supported by the publicly-available arguments in the direction of what they already believe. I’ll call this “biased sharing”.
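
Here is a rough sketch of the three sharing rules. To keep it self-contained, I represent arguments as plain (premises, side, strength) triples rather than the Argument class above; the helper names and the exact tie-breaking are my own, not taken from the published model.

```python
import random

def totals(props, arguments):
    """Total strength for each side of the main question, given the set of
    publicly shared propositions; arguments are (premises, side, strength)."""
    out = {True: 0.0, False: 0.0}
    for premises, side, strength in arguments:
        if premises <= props:
            out[side] += strength
    return out

def impact(prop, public, arguments):
    """How much adding prop to the public pool changes the total support."""
    before, after = totals(public, arguments), totals(public | {prop}, arguments)
    return sum(abs(after[s] - before[s]) for s in (True, False))

# Each rule picks one not-yet-shared proposition from the speaker's memory.
# (Unused parameters are kept so all three rules share one signature.)
def random_share(private, public, arguments, belief):
    return random.choice(list(private - public))

def influential_share(private, public, arguments, belief):
    return max(private - public, key=lambda p: impact(p, public, arguments))

def biased_share(private, public, arguments, belief):
    """Share whatever most moves the public balance toward the ant's own belief."""
    def shift(p):
        before, after = totals(public, arguments), totals(public | {p}, arguments)
        return (after[belief] - after[not belief]) - (before[belief] - before[not belief])
    return max(private - public, key=shift)
```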

Abstracting away from the contact network (that is, assuming it is a complete network), these different sharing methods do have a major impact on what the members of the group believe under a wide range of conditions. Below, you’ll see a table of what proportion of the group has the attitude about the main question that’s supported by all of the propositions. To collect these data, we assumed that there are 25 agents and 100 propositions, that 100 subsets of size 1 to 3 of those propositions are designated as arguments, that arguments are assigned a supported content at random, that the strengths of those supports are drawn from an exponential distribution with a mean of 1, that everyone starts out with 10 random (possibly different) propositions, and that at each round of the model, one random agent is chosen to speak according to their sharing rule. [Phew, that’s a lot of math stuff! — Editor]

[Table 1: proportion of the group holding the attitude supported by all the propositions, under each sharing rule]

What we see is that groups of influential sharers do significantly better than groups of the other kinds. Adversarial groups do the worst, performing significantly worse than even groups that simply share information at random. So those who advocate for different sharing methods might be on to something, as the model shows us how sharing methods could make a big difference.
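
For readers who want to experiment, here is a rough, self-contained sketch of a single run with the parameters described above: 25 agents, 100 propositions, 100 arguments of 1 to 3 premises each, exponentially distributed strengths with mean 1, 10 random starting propositions per agent, and one randomly chosen speaker per round. It is my reconstruction, not the code behind the table above; it uses random sharing and, as in the results just discussed, unlimited memory.

```python
import random

N_AGENTS, N_PROPS, N_ARGS, N_START, N_ROUNDS = 25, 100, 100, 10, 1000

def make_arguments():
    """100 random arguments: 1-3 premise propositions, a random side of the
    main question, and an exponentially distributed strength with mean 1."""
    args = []
    for _ in range(N_ARGS):
        premises = frozenset(random.sample(range(N_PROPS), random.randint(1, 3)))
        args.append((premises, random.random() < 0.5, random.expovariate(1.0)))
    return args

def attitude(props, arguments):
    """The side of the main question best supported by the given propositions."""
    totals = {True: 0.0, False: 0.0}
    for premises, side, strength in arguments:
        if premises <= props:
            totals[side] += strength
    return totals[True] > totals[False]

def run_once():
    arguments = make_arguments()
    agents = [set(random.sample(range(N_PROPS), N_START)) for _ in range(N_AGENTS)]
    for _ in range(N_ROUNDS):
        speaker = random.choice(agents)
        prop = random.choice(list(speaker))   # random sharing, for illustration
        for memory in agents:                 # complete network: every ant hears it
            memory.add(prop)
    correct = attitude(set(range(N_PROPS)), arguments)  # supported by all propositions
    return sum(attitude(m, arguments) == correct for m in agents) / N_AGENTS

print(run_once())  # proportion of the group with the supported attitude
```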

An important aspect of these results is that they assume that each individual ant can remember all of the information they hear. Of course, ants (and humans) have limited memories, so this is an unrealistic assumption. Does its unrealism pose a problem? Unrealism in models isn’t always a problem. Undamped simple harmonic oscillator models of spring movements are good models despite the fact that no real spring is undamped. That said, if an unrealistic assumption makes a big difference to the qualitative character of the model, it might be interesting to see the model with that assumption relaxed.

So suppose we limit the memories of the individual ants. We then have to decide how the ants should manage their limited memories once they’re full. Suppose we have an ant who has a memory limit of 10 propositions (and assume that arguments do not take up any additional memory, since they are constituted by propositions). Consider three ways the ant might deal with an incoming 11th proposition when they already have 10. First, the ant might just forget one of the 11 propositions at random and remember the rest. I’ll call this “random memory”. Alternatively, the ant might forget the proposition that contributes the least informational content to what they believe (i.e. they would forget the proposition whose inclusion in memory contributes the least overall strength to either potential belief content). I’ll call this “weight-minded memory”. Another alternative is that the ant could place a premium on the coherence of their belief state and thereby forget a reason that goes against what they would all-things-considered believe on the basis of all 11 pieces of information. For precision, let’s assume they drop the piece of information with the least informational content among those that go against what is supported by all 11 reasons. I’ll call this “coherence-minded memory”.
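
Below is a sketch of the three memory rules in the same hypothetical representation (propositions as indices, arguments as (premises, side, strength) triples). Each function takes an over-full memory, for example the 10 old propositions plus the incoming 11th, and trims it back to the limit; the exact tie-breaking is my own guess.

```python
import random

def totals(props, arguments):
    """Total strength for each side, given the propositions in memory."""
    out = {True: 0.0, False: 0.0}
    for premises, side, strength in arguments:
        if premises <= props:
            out[side] += strength
    return out

def random_memory(memory, arguments, limit=10):
    """Forget propositions at random until the memory fits the limit."""
    memory = set(memory)
    while len(memory) > limit:
        memory.discard(random.choice(list(memory)))
    return memory

def weight_minded_memory(memory, arguments, limit=10):
    """Forget whichever proposition's removal costs the least total strength."""
    memory = set(memory)
    while len(memory) > limit:
        full = totals(memory, arguments)
        def cost(p):
            t = totals(memory - {p}, arguments)
            return (full[True] - t[True]) + (full[False] - t[False])
        memory.discard(min(memory, key=cost))
    return memory

def coherence_minded_memory(memory, arguments, limit=10):
    """Among propositions that tell against the attitude supported by the whole
    memory, forget the one contributing the least strength to that dissent."""
    memory = set(memory)
    while len(memory) > limit:
        full = totals(memory, arguments)
        favoured = full[True] > full[False]
        def dissent(p):
            t = totals(memory - {p}, arguments)
            return full[not favoured] - t[not favoured]  # strength p lends the other side
        dissenters = [p for p in memory if dissent(p) > 0]
        # If nothing speaks against the favoured side, fall back to a random drop.
        memory.discard(min(dissenters, key=dissent) if dissenters
                       else random.choice(list(memory)))
    return memory

# Usage on hearing a new proposition:
#   memory = weight_minded_memory(memory | {new_prop}, arguments)
```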

When we model each of these, what we see is that how the ants manage their memory has a transformative impact on the results. Table 2 shows the impact of the different memory management methods on what percent of the group has the attitude supported by all the propositions. What we see is that when the ants’ memories are limited, the relative influence of the sharing methods changes. Recall that for unlimited agents, biased sharing of information performed significantly worse than the other two methods. But when ants with full memories forget things at random, groups who share information in a biased manner do significantly better than groups that use either of the two other ways of sharing information (p < .05 in both cases for t-tests and Wilcoxon rank-sum tests). More generally, the methods that limited agents use to share information have a much more muted impact on the group outcomes than they do for unlimited agents. Whereas for unlimited agents, the sharing method made the difference between 63% of the group getting it right and 88% getting it right (a 25 percentage point difference), for these limited agents, the difference is less than 6 points.

[Table 2: impact of the different memory management methods on the percent of the group holding the supported attitude]

When we look at data from a much larger dataset (including 1000 runs from each of 288 different combinations of parameters), we see that the two general results described above are robust across those parameter settings: for unlimited agents, which sharing method is used makes a significant difference to the outcome, and for limited agents, the significance of the sharing methods is much more muted.

If we hold fixed the methods agents use to share information and look at how different memory methods affect the outcome, we see large and consistent differences. Table 3 shows these data. For all of the sharing rules, weight-minded memory management is best, coherence-minded is second, and random rememberers do worst. As the table shows, though, the magnitude of these differences is influenced by how agents share their information. Whereas for random and influential speakers, the average performance of groups with different memory rules differs by 15.4 points and 12.8 points, respectively, for biased speakers, that difference is only 3.7 points. What we see then is that biased sharing of information mitigates the impact of how agents manage their memories. There are no similarly consistent patterns to be found in holding fixed the memory methods and looking at the impact of different sharing methods.

[Table 3: group performance under each memory rule, holding the sharing rule fixed]

In general then, these data suggest that how agents manage their memory is a greater factor in the overall performance of the group than how the agents share their information. This is further confirmed by comparing the sums of squared differences, where we see that the amount of variation attributable to the memory method is much higher than the amount attributable to the sharing method: how agents forget information explains 86 times as much variation in the outcome as how agents share their information! Of course, what percentage of the group has the right attitude after 1000 steps of the model isn’t the only metric of group performance we might be interested in. We might want to look at data from other steps, but we might also want to know how the sharing and memory rules affect whether and how quickly the groups form a consensus, whether the group has the right attitude when it does converge, and whether and how quickly members of the group stop changing their beliefs (regardless of whether there’s a consensus). In each of these cases, the same general lesson as above applies: how individuals in a group manage their memory has an impact that’s at least roughly on par with the impact of how agents share their information.
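
The sum-of-squared-differences comparison can be read as a simple between-groups sum-of-squares calculation: group the runs by memory rule (or by sharing rule) and ask how far the group means sit from the grand mean. Here is a rough sketch of that calculation on made-up run records, not the actual dataset.

```python
from statistics import mean

def between_group_ss(results, key):
    """Size-weighted sum of squared differences between each group's mean score
    and the grand mean, where groups are defined by `key`."""
    grand = mean(r["score"] for r in results)
    groups = {}
    for r in results:
        groups.setdefault(key(r), []).append(r["score"])
    return sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())

# Hypothetical run records: each run has a sharing rule, a memory rule, and a score.
runs = [
    {"sharing": "random",      "memory": "weight",    "score": 0.80},
    {"sharing": "random",      "memory": "random",    "score": 0.60},
    {"sharing": "influential", "memory": "weight",    "score": 0.85},
    {"sharing": "influential", "memory": "random",    "score": 0.66},
    {"sharing": "biased",      "memory": "coherence", "score": 0.72},
    {"sharing": "biased",      "memory": "random",    "score": 0.70},
]
ss_memory = between_group_ss(runs, key=lambda r: r["memory"])
ss_sharing = between_group_ss(runs, key=lambda r: r["sharing"])
print(ss_memory / ss_sharing)  # variation attributable to memory vs. sharing
```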

There are many potential upshots of this quick tour of modelling ant communication and many future directions this kind of research could take. I hope that readers will explore some of those on their own, but I would like to highlight one upshot before closing: what these models suggest is that if we’re invested in understanding ant communication, a focus solely on the outward-facing behaviors involved in ant communication might be misguided. Ant communication (and presumably human communication) could be as influenced by individual features of ants (such as how they manage their memory) as it is by the features of what is traditionally conceived of as part of how they communicate, like what pheromones are used when, the structure of their communication network, and whether sound is used.

 

Much of the work described here was done in collaboration with Patrick Grim, Aaron Bramson, William “Zev” Berger, Karen Kovaka, and Jiin Jung.  For more information about our group, see the website for the Computational Social Philosophy Lab.


Dr. Daniel J. Singer is an Assistant Professor of Philosophy at the University of Pennsylvania. His research intersects epistemology, ethics, and social philosophy, and is primarily motivated by two questions: (1) how and why epistemic norms apply to us, and (2) how epistemic norms for groups differ from norms for individuals. He investigates both questions using traditional philosophical methods, but as director of the Computational Social Philosophy Lab, Dr. Singer also uses agent-based computer simulations to investigate those questions (as well as other questions in political philosophy, social epistemology, and philosophy of science).