How Cities Segregate the Rich and Poor
Here we test four variations of Thomas Schelling's segregation model to observe how stark contrasts between groups of people form in large cities.
Schelling's model is well known for its ability to capture how people distribute themselves across a city, and it underlies one explanation for why there tends to be such a stark contrast between wealth and poverty in large cities.
It should be mentioned that "color" here does not refer to skin color specifically, but to any similarity that might affect residential preferences: how quiet people are, how respectful they tend to be, whether they have dogs or cats, whether they like to have BBQs with their neighbors, income, and so on.
First, an empty grid of 40 by 40 cells (1600 positions in total) is generated to serve as the residential space for the agents. Two equally sized lists of distinct agent types (red and blue) are generated, each agent labeled with the prefix character "R" for red or "B" for blue followed by its index in the original ordered list (e.g. R203 or B457). Seven hundred agents of each color are created, for a grand total of 1400 agents. The agents are then placed randomly on the 40 by 40 grid, leaving 200 empty cells. Empty positions are marked with "0" and collected in a list denoted q (q = total number of empty cells in the grid). Empty cells are drawn as white circles, while red and blue circles represent the two types of agents.
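A minimal sketch of this setup in Python is shown below; the function name make_grid, the list-of-lists grid representation, and the fixed random seed are illustrative assumptions rather than the original implementation.

```python
import random

GRID = 40          # 40 x 40 grid = 1600 cells
N_PER_COLOR = 700  # 700 red + 700 blue agents, leaving 200 empty cells

def make_grid(seed=0):
    """Randomly place labeled red/blue agents on the grid; "0" marks an empty cell."""
    random.seed(seed)
    agents = [f"R{i}" for i in range(N_PER_COLOR)] + [f"B{i}" for i in range(N_PER_COLOR)]
    cells = agents + ["0"] * (GRID * GRID - len(agents))
    random.shuffle(cells)
    grid = [cells[r * GRID:(r + 1) * GRID] for r in range(GRID)]
    # q collects the (x, y) coordinates of all currently empty cells
    q = [(x, y) for x in range(GRID) for y in range(GRID) if grid[x][y] == "0"]
    return grid, q
```

Calling make_grid() once at the start of a run reproduces the 1400-agent, 200-empty-cell configuration described above.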
Agents are supplied with their current position (x, y) on the grid and their color, which we refer to as "house" in our code. Each agent reviews its eight closest neighbors (up, down, left, right, and the four diagonals) and notes their colors. An agent's happiness depends on whether its neighborhood contains agents similar to itself, with a cutoff value of three same-color neighbors (k = 3) denoting baseline happiness.
Agents are therefore happy in their current position if at least three of the eight neighbors surrounding them are the same color. If an agent is unhappy (k < 3), it considers each of the empty ("0") locations on the grid and determines its prospective happiness if it were to move there. If an agent determines it will be happier at a given empty location, it moves there and leaves its current position open for other unhappy agents to consider later. The randomly initialized agents are subjected to four different relocation policies that determine whether or not they will relocate to a given empty cell based on their happiness.
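The happiness test can be sketched as follows, building on the grid above; the helper name same_color_neighbors and its optional color argument (used later to score a prospective move into an empty cell) are illustrative choices.

```python
K = 3  # baseline happiness threshold: at least three same-color neighbors

def same_color_neighbors(grid, x, y, color=None):
    """Count the eight surrounding cells that match `color`
    (by default, the color of the agent currently at (x, y))."""
    color = color or grid[x][y][0]  # 'R' or 'B'
    count = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < GRID and 0 <= ny < GRID and grid[nx][ny] != "0":
                count += grid[nx][ny][0] == color
    return count

def is_happy(grid, x, y):
    return same_color_neighbors(grid, x, y) >= K
```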
Policy 1: Random Move
The first relocation policy, Random Move, was implemented by moving every unhappy agent to a new location, provided the new location would give the agent at least three same-color neighbors. For each agent considering relocation, only a portion of the available empty locations is considered: free_q = 100 randomly chosen free positions out of the q = 200 available. The agent assesses these prospective locations one at a time and relocates to the first cell that would make it happy, i.e. one with at least k = 3 same-color neighbors. Once an agent is happy, it no longer relocates. If none of the sampled free positions would make the agent happy, it moves to any cell that would make it slightly happier (for example, if currently k = 1 and only one neighbor is the same color, a position with k = 2 suffices). The algorithm is repeated for several epochs, until all agents have found a satisfactory position.
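One way the Random Move policy could look, continuing the sketch above; sampling with random.sample and falling back to the single best improving cell are assumptions (the description only requires moving to some slightly happier cell).

```python
def random_move(grid, q, x, y, free_q=100):
    """Policy 1: relocate an unhappy agent to the first sampled empty cell that makes it
    happy; failing that, to the sampled cell that most improves its neighbor count."""
    color = grid[x][y][0]
    best_k = same_color_neighbors(grid, x, y)       # a move must beat the current count
    best = None
    for (ex, ey) in random.sample(q, min(free_q, len(q))):
        k_new = same_color_neighbors(grid, ex, ey, color)
        if k_new >= K:                              # first fully satisfying cell wins
            best = (ex, ey)
            break
        if k_new > best_k:                          # otherwise remember the best improvement
            best_k, best = k_new, (ex, ey)
    if best is not None:
        ex, ey = best
        grid[ex][ey], grid[x][y] = grid[x][y], "0"  # move the agent, vacate the old cell
        q.remove((ex, ey))
        q.append((x, y))
```

Running this over all unhappy agents, epoch after epoch, gives the stopping behavior described above: once an agent reaches k = 3 it is skipped in later epochs.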
Policy 2: Social Network Recommendation
The second relocation policy, Social Network Recommendation, was implemented in a slightly different fashion. Each agent has a network of n "friends" scattered across the grid whom it can contact about potential places to relocate to. When an agent wants to move, it contacts its friends to determine whether it would be happier moving closer to any of them. Each contacted friend assesses all cells in a p by p square around itself and reports any suitable positions nearby that would make the agent happier. The agent collects the set of recommended locations and moves to one of them at random. If no friend can recommend a suitable location, the agent remains in its current position.
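A sketch of this policy under the same assumptions, treating each agent's friends as a list of n grid positions and taking p as the side length of the square each friend scans (p is assumed odd so the window is centered on the friend).

```python
def social_network_move(grid, q, x, y, friends, p=5):
    """Policy 2: each friend scans a p x p window around itself for empty cells that
    would make the agent happy; the agent moves to a random recommendation."""
    color = grid[x][y][0]
    half = p // 2
    recommendations = []
    for (fx, fy) in friends:  # friends are (x, y) positions elsewhere on the grid
        for ex in range(max(0, fx - half), min(GRID, fx + half + 1)):
            for ey in range(max(0, fy - half), min(GRID, fy + half + 1)):
                if grid[ex][ey] == "0" and same_color_neighbors(grid, ex, ey, color) >= K:
                    recommendations.append((ex, ey))
    if recommendations:       # if no friend can recommend a cell, the agent stays put
        ex, ey = random.choice(recommendations)
        grid[ex][ey], grid[x][y] = grid[x][y], "0"
        q.remove((ex, ey))
        q.append((x, y))
```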
Figure 1 compares the performance of Policy 1 (Random Move) and Policy 2 (Social Network Recommendation), with varying values of n and p, over a series of runs. The figure plots the percentage of happy agents over epochs, tracking the time until the number of happy agents was maximized or reached a plateau. Table 1 shows the standard deviations for Policy 1 and the six versions of Policy 2.
Policy 3: Greedy Move
The third relocation policy is a variation of a greedy search algorithm: agents not only want to relocate to a position in which they are happy (k = 3) but, given the opportunity, will maximize their happiness over time by continuously looking for better positions (more same-color agents in the vicinity further increases happiness). If no available location can increase the agent's happiness, the agent remains in place. This greedy algorithm effectively maximizes the happiness of all agents to the highest degree possible, yielding a highly segregated pattern. Performance is similar to Policy 1 but lags slightly behind in early epochs.
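A possible rendering of the greedy policy in the same style; scanning every empty cell in q and moving only on a strict improvement are the assumptions here.

```python
def greedy_move(grid, q, x, y):
    """Policy 3: move to the empty cell with the most same-color neighbors,
    but only if it strictly improves on the agent's current happiness."""
    color = grid[x][y][0]
    best, best_k = None, same_color_neighbors(grid, x, y)
    for (ex, ey) in q:
        k_new = same_color_neighbors(grid, ex, ey, color)
        if k_new > best_k:
            best, best_k = (ex, ey), k_new
    if best is not None:
        ex, ey = best
        grid[ex][ey], grid[x][y] = grid[x][y], "0"
        q.remove((ex, ey))
        q.append((x, y))
```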
Policy 4: Move Near Happy Friend
This policy is somewhat similar to the Social Network Recommendation policy, but instead relies on finding a suitable location near a friend of the same color who is already happy. The agent contacts friends located near empty cells; if a same-color friend is happy in its location, the agent moves to the empty cell beside that friend. If the friend is not happy, the agent asks other friends about alternative open locations. The agent also asks whether friends of a different color are happy near open cells; if a different-color friend is unhappy, the agent may move to the empty cell beside that friend, on the assumption that the unhappy friend will soon move somewhere with more friends of its own color. Performance is similar to Policy 1 but also lags behind in early epochs.
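A rough sketch of this policy; restricting "nearby" empty cells to the eight cells adjacent to each friend, and preferring happy same-color friends over unhappy different-color friends, are assumptions drawn from the description above.

```python
def move_near_happy_friend(grid, q, x, y, friends):
    """Policy 4: move next to a same-color friend who is already happy, or next to an
    unhappy different-color friend who is expected to vacate the neighborhood."""
    color = grid[x][y][0]
    preferred, fallback = [], []
    for (fx, fy) in friends:
        if grid[fx][fy] == "0":   # friend has already moved away from this position
            continue
        nearby_empty = [(ex, ey) for (ex, ey) in q
                        if abs(ex - fx) <= 1 and abs(ey - fy) <= 1]
        if not nearby_empty:
            continue
        if grid[fx][fy][0] == color and is_happy(grid, fx, fy):
            preferred.extend(nearby_empty)   # open cell beside a happy same-color friend
        elif grid[fx][fy][0] != color and not is_happy(grid, fx, fy):
            fallback.extend(nearby_empty)    # open cell beside an unhappy different-color friend
    candidates = preferred or fallback
    if candidates:
        ex, ey = random.choice(candidates)
        grid[ex][ey], grid[x][y] = grid[x][y], "0"
        q.remove((ex, ey))
        q.append((x, y))
```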