Seed AI

The intelligence explosion is a possible outcome of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown. An intelligence explosion would be associated with a technological singularity.

The notion of an "intelligence explosion" was first described by Good (1965), who speculated on the effects of superhuman machines, should they ever be invented:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[1] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humans.[2]

If a superhuman intelligence were to be invented, whether through the amplification of human intelligence or through artificial intelligence, it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as seed AI[3][4] because an AI created with engineering capabilities that matched or surpassed those of its human creators would have the potential to autonomously improve its own software and hardware or design an even more capable machine, which could in turn design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or of theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The speculated means of intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading. The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity not to occur, they would all have to fail.[5] Hanson (1998) is skeptical of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence has been exhausted, further improvements will become increasingly difficult to find. Despite the many speculated means of amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among organizations trying to advance the singularity.[citation needed]

Whether or not an intelligence explosion occurs depends on three factors.[6] The first, accelerating factor is the new intelligence enhancements made possible by each previous improvement. Conversely, as intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement must beget at least one more improvement, on average, for the singularity to continue. Finally, the laws of physics will eventually prevent any further improvement.

There are two logically independent, but mutually reinforcing, causes of intelligence improvement: increases in the speed of computation, and improvements to the algorithms used.[7] The former is predicted by Moore's law and the forecast improvements in hardware,[8] and is comparatively similar to previous technological advances. On the other hand, most AI researchers[who?] believe that software is more important than hardware.[citation needed]

A 2017 email survey of authors with publications at the 2015 NIPS and ICML machine learning conferences asked about the chance of an intelligence explosion. Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely".[9]

Speed improvements

Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. In oversimplified terms,[10] Moore's law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months, whereafter four months, two months, and so on, converging towards a speed singularity.[11]
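The convergence implied here can be made explicit. If the k-th doubling consumes $18/2^k$ external months (an idealization of the figures above), the total external time before an unbounded number of doublings have occurred is a finite geometric sum:

$$\sum_{k=0}^{\infty} \frac{18}{2^k}\ \text{months} \;=\; \frac{18}{1 - \tfrac{1}{2}}\ \text{months} \;=\; 36\ \text{months},$$

so under this idealized model, every subsequent doubling would arrive within three external years of the first.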
An upper limit on speed may eventually be reached, although it is unclear how high this would be. Hawkins (2008),[citation needed] responding to Good, argued that the upper limit is relatively low:

Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can be. We would end up in the same place; we'd just get there a bit faster. There would be no singularity.

Whereas if the upper limit were a lot higher than current human levels of intelligence, the effects of the singularity would be great enough as to be indistinguishable (to humans) from a singularity with no upper limit. For example, if the speed of thought could be increased a million-fold, a subjective year would pass in about 30 physical seconds.[5]

It is difficult to directly compare silicon-based hardware with neurons, but Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.

Algorithm improvements

Some intelligence technologies, like "seed AI",[3][4] may also have the potential to make themselves more efficient, not just faster, by modifying their source code. These improvements would make further improvements possible, which would in turn make further improvements possible, and so on.
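The condition stated earlier, that each improvement must on average beget at least one further improvement, can be illustrated as a branching process. The following toy simulation is purely illustrative and not drawn from the cited sources; the function names and the Poisson assumption are inventions for the sketch:

```python
import math
import random

def simulate_improvements(r_mean: float, generations: int = 40, seed: int = 0) -> list[int]:
    """Toy branching-process model of recursive self-improvement.

    Each improvement in a generation spawns a Poisson(r_mean) number of
    follow-on improvements. Illustrative only; not from the cited sources.
    """
    rng = random.Random(seed)

    def poisson(lam: float) -> int:
        # Knuth's method; adequate for small lambda.
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while p > threshold:
            k += 1
            p *= rng.random()
        return k - 1

    counts = [1]  # start from a single initial improvement
    for _ in range(generations):
        nxt = sum(poisson(r_mean) for _ in range(counts[-1]))
        counts.append(nxt)
        if nxt == 0:  # the cascade has died out
            break
    return counts

# r_mean > 1: improvements tend to compound (the "explosion" regime);
# r_mean < 1: the cascade almost surely fizzles out.
print(simulate_improvements(1.2)[:10])
print(simulate_improvements(0.8)[:10])
```

With a mean above 1, the number of improvements per generation tends to grow without bound; below 1, it dies out, matching the qualitative dichotomy described above.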
The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware or to program factories appropriately,[citation needed] whereas an AI rewriting its own source code could do so while contained in an AI box. Second, as with Vernor Vinge's conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from previous geological rates of change, and improved intelligence could cause change to be as different again.[12]

There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI may not be invariant under self-improvement, potentially causing the AI to optimise for something other than what was intended.[13][14] Second, AIs could compete with humanity for the scarce resources mankind uses to survive.[15][16] While not actively malicious, there is no reason to think that AIs would actively promote human goals unless they could be programmed as such; if not, they might use the resources currently supporting mankind to promote their own goals, causing human extinction.[17][18][19]

Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity: whereas hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, an intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.[20] An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang".[21]

Impact

Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.[22]
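The quoted factors follow directly from ratios of doubling times, and extrapolating the same factor-of-60 jump once more (an illustrative calculation, not Hanson's own) reproduces the quarterly lower bound:

$$\frac{900\ \text{yr}}{15\ \text{yr}} = 60, \qquad \frac{15\ \text{yr}}{60} = 0.25\ \text{yr} \approx 13\ \text{weeks}.$$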
Superintelligence

Further information: Superintelligence

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. Technology forecasters and researchers disagree about when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Existential risk

Main article: Existential risk from artificial general intelligence

Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. A whimsical example from Nick Bostrom is an AI originally programmed with the goal of manufacturing paper clips which, upon achieving superintelligence, decides to convert the entire planet into a paper clip manufacturing facility.[23][24][25] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[26] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[15][27] and that humans would be powerless to stop them.[28] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[19]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[29]

Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that the first real AI would have a head start on self-improvement and, if friendly, could prevent unfriendly AIs from developing, as well as providing enormous benefits to mankind.[18]

Bill Hibbard (2014) proposes an AI design that avoids several dangers, including self-delusion,[30] unintended instrumental actions,[13][31] and corruption of the reward generator.[31] He also discusses social impacts of AI[32] and testing AI.[33] His 2001 book Super-Intelligent Machines advocates public education about AI and public control over AI; it also proposed a simple design that was vulnerable to corruption of the reward generator.

One hypothetical approach to controlling an artificial intelligence is an AI box, in which the AI is kept constrained inside a simulated world and not allowed to affect the external world. However, a sufficiently intelligent AI may simply be able to escape by outsmarting its less intelligent human captors.[34][35][36]

Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."
Hawking believes that in the coming decades, AI could offer "incalculable benefits and risks", such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Hawking believes more should be done to prepare for the singularity:[37]

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.

Hard vs. soft takeoff

In a hard takeoff scenario, an AGI rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals. In a soft takeoff scenario, the AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer its development.[38][39]

Ramez Naam argues against a hard takeoff by pointing out that we already see recursive self-improvement by superintelligences such as corporations. For instance, Intel has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law.[40] Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."[41]
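Naam's point can be put schematically (this formalization is illustrative, not from the source): if the cost of designing a mind of intelligence $n$ grows superlinearly, say $C(n) \propto n^k$ with $k > 1$, then

$$\frac{C(2)}{C(1)} = 2^k > 2,$$

so each successive level of intelligence is disproportionately harder to reach, and gains in capability can be absorbed by the growing difficulty of the next step rather than compounding freely.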
J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in the one area where it was most effective and buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[42]

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a very hard, five-minute takeoff, but thinks a takeoff from human to superhuman level on the order of five years is reasonable; he calls this a "semihard takeoff".[43]

Max More disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, because they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is not clear why they would do better in a discontinuous way than existing human cognitive scientists at producing superhuman intelligence, although the rate of progress would increase. More also argues that a superintelligence would not transform the world overnight, because a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."[44]

See also

Accelerating change
Artificial consciousness
Flynn effect
Human intelligence § Improving intelligence
Neuroenhancement
Outline of transhumanism
Postbiological evolution
Robot learning

Papers, Please

For the police state trope, see Your papers, please.

Developer(s): 3909 LLC
Publisher(s): 3909 LLC
Designer(s): Lucas Pope
Platform(s): Microsoft Windows, OS X, Linux, iOS, PlayStation Vita
Release: Windows and OS X on August 8, 2013; Linux on February 12, 2014; iOS on December 12, 2014; PlayStation Vita TBA (all worldwide)
Genre(s): Puzzle
Mode(s): Single-player

Papers, Please: A Dystopian Document Thriller is a puzzle video game created by indie game developer Lucas Pope, developed and published through his company, 3909. The game was released on August 8, 2013, for Microsoft Windows and OS X, for Linux on February 12, 2014, and for the iPad on December 12, 2014. A port for the PlayStation Vita was announced in August 2014.

Papers, Please has the player take the role of a border-crossing immigration officer in the fictional dystopian Eastern Bloc-like country of Arstotzka, which has ongoing political hostilities with its neighboring countries. As the officer, the player must review each immigrant's and returning citizen's passport and other supporting paperwork against an ever-growing list of rules, using a number of tools and guides, allowing in only those with the proper paperwork, rejecting those without all the proper forms, and at times detaining those with falsified information. The player's daily salary rewards how many people they have processed correctly that day, with fines for mistakes; the salary is used to provide shelter, food, and heat for the player's in-game family. In some cases, the player is presented with moral decisions, such as approving the entry of a pleading spouse of a citizen despite the lack of proper paperwork, knowing this will affect their salary. In addition to a story mode that follows several scripted events occurring within Arstotzka, the game includes an endless mode that challenges the player to process as many immigrants as possible.

Pope came upon the idea of passport-checking as a gameplay mechanic after observing the behavior of immigration officers during his own international travels. He coupled this with a narrative inspired by spy thriller films, making the immigration officer one who challenges spies trying to move in or out of countries with fake travel documents. He was able to build on principles and concepts from some of his earlier games, including The Republia Times, from which he also borrowed the setting of Arstotzka. Pope publicly shared details of the game's development from its onset, leading to high interest in the title and encouraging him to put more effort into it; though he initially planned to spend only a few weeks, Pope ended up spending about nine months on the game.

Papers, Please was positively received on release, and has come to be seen as an example of an empathy game and a demonstration of video games as an art form. The game received various awards and nominations from the Independent Games Festival, the Game Developers Choice Awards, and the BAFTA Video Games Awards, and was named by Wired and The New Yorker as one of the top games of 2013. Pope reported that by 2016, more than 1.8 million copies of the title had been sold.

Gameplay

The gameplay of Papers, Please focuses on the work life of an immigration inspector at a border checkpoint for the fictitious country of Arstotzka in the year 1982.[1] At the time of the game, Arstotzka has recently ended a six-year war with a neighboring country, and political tensions between it and other nearby countries remain high. As the checkpoint inspector, the player reviews each arrival's documents and uses an array of tools to determine whether the papers are in order. The player must arrest certain individuals, such as terrorists, wanted criminals, smugglers, and entrants with forged or stolen documents; keep other undesired individuals, such as those with missing or expired paperwork, out of the country; and allow the rest through. For each in-game day, the player is given specific rules on what documentation is required and the conditions for allowing or denying entry, which become progressively more complex as the days pass.

One by one, immigrants arrive at the checkpoint and present their paperwork. The player can use a number of tools to review the paperwork and make sure it is in order. When discrepancies are discovered, the player may interrogate the applicant, demand missing documents, take the applicant's fingerprints and order a copy of the applicant's identity record to resolve name or physical-description discrepancies, order a full-body scan to resolve weight or apparent biological sex discrepancies, or gather enough incriminating evidence to arrest the entrant. There are opportunities for the player to have the applicant detained, and the applicant may at times attempt to bribe the inspector. The player ultimately must stamp the entrant's passport (or temporary visa slip, if the individual has no passport) to accept or deny entry, unless the player orders the entrant's arrest. If the player has violated protocol, a citation is issued shortly after the entrant leaves. Generally the player can incur two violations without penalty, but subsequent violations cost increasing monetary penalties deducted from the day's salary. The player has a limited amount of real time, representing a full day shift at the checkpoint, to process as many arrivals as possible.

The player's immigration checkpoint workstation shows the current arrival (left center), the various paperwork the player is currently processing (bottom right), and the current state of the checkpoint (top half).

At the end of each in-game day, the player earns money based on how many people have been processed (5 credits for each individual who enters the booth before the shift ends) and bribes collected, less any penalties for protocol violations, and then must set a simple budget, spending that money on rent, food, heat, and other necessities in low-class housing for themselves and their family. The player must also take care not to earn too much money in illegitimate ways, lest the family be reported and all the money accumulated thus far be confiscated by the government.
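A minimal sketch of the end-of-day payout described above. Only the 5-credit rate, the collected bribes, and the deduction of penalties after two free citations come from the text; the escalating 5, 10, 15, ... fine schedule and all names here are assumptions for illustration:

```python
def end_of_day_pay(processed: int, bribes: int, citations: int) -> int:
    """Toy model of the end-of-day payout; not the game's actual code.

    Assumes the first two citations carry no fine and later fines
    escalate as 5, 10, 15, ... credits (an invented schedule).
    """
    salary = 5 * processed         # 5 credits per entrant processed
    fined = max(0, citations - 2)  # the first two violations are free
    penalties = sum(5 * i for i in range(1, fined + 1))
    return salary + bribes - penalties

# Example: 9 entrants processed, one 10-credit bribe, 4 citations
# (2 free, then fines of 5 and 10): 45 + 10 - 15 = 40 credits.
print(end_of_day_pay(9, 10, 4))
```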
As relations between Arstotzka and nearby countries deteriorate, sometimes due to terrorist attacks, new sets of rules are gradually added in step with the game's story, such as denying entry to citizens of specific countries or demanding new types of documentation.

The player may be challenged with moral dilemmas as the game progresses, such as allowing the supposed spouse of an immigrant through despite incomplete papers, at the risk of admitting a terrorist into the country. The game uses a mix of randomly generated entrants and scripted encounters; randomly generated entrants are created from templates.
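A minimal sketch of what such template-driven generation might look like. Every field name, value, and rate below is an invented assumption for illustration; the game's actual generator (written in Haxe) is not reproduced here:

```python
import random

# Invented sample data; not taken from the game.
NAMES = ["Kordon Kallo", "Vasily Dolan", "Marta Grech"]
COUNTRIES = ["Arstotzka", "Kolechia", "Impor"]

def generate_entrant(rng: random.Random, discrepancy_rate: float = 0.3) -> dict:
    """Build one entrant from a template, sometimes injecting a
    discrepancy (e.g. passport name differing from ID name) for the
    player to catch."""
    name = rng.choice(NAMES)
    entrant = {
        "passport_name": name,
        "id_name": name,
        "country": rng.choice(COUNTRIES),
        "has_entry_permit": True,
    }
    if rng.random() < discrepancy_rate:
        if rng.random() < 0.5:
            # Name mismatch between passport and identity record.
            entrant["id_name"] = rng.choice([n for n in NAMES if n != name])
        else:
            entrant["has_entry_permit"] = False  # missing document
    return entrant

rng = random.Random(1982)  # seeding lets similar encounters recur across runs
print(generate_entrant(rng))
```

Seeding the generator echoes the semi-random encounters described in the development section below, where similar events recur across playthroughs with different names and details.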
A mysterious organization known as EZIC also appears, with several of its members showing up at the checkpoint and giving the inspector orders intended to help bring down the government and establish a new one. The player can choose whether or not to help this organization, letting its members through to assassinate certain powerful individuals the organization deems too corrupt to live, and even personally killing two high-ranking officials on its behalf. The game has a scripted story mode with twenty possible endings depending on the player's actions, as well as some unlockable randomized endless-play modes.[2][3]

Development

Lucas Pope accepting an award for the game at the 2014 Game Developers Conference.

Papers, Please was developed by Lucas Pope, a former Naughty Dog developer who worked on the Uncharted series.[4] Pope opted to leave Naughty Dog around 2010, after Uncharted 2: Among Thieves was released, and move to Saitama, Japan, with his wife Keiko, a game designer herself. Part of this move was to be closer to her family, but Pope had also been developing smaller games with Keiko during his time at Naughty Dog, and wanted to move away from "the definite formula" of the Uncharted series toward more exploratory ideas for his own games.[5][6] The two worked on a few independent titles while there, and briefly relocated to Singapore to help a friend with their game.[5]

From his travels in Asia and some return trips to the United States, Pope became interested in the work of immigration and passport inspectors, whom he described as follows: "They have a specific thing they're doing and they're just doing it over and over again."[5] He recognized that the passport-checking experience, which he considered "tense", could be made into a fun game.[1][3] While he had been able to come up with the mechanics of passport checking, Pope lacked a story to drive the game. He was then inspired by films like Argo and the Bourne films, which feature characters attempting to infiltrate into or out of other countries by subterfuge. Pope saw the opportunity to reverse those scenarios, putting the player in the role of the immigration officer who must stop these types of agents, which matched his existing gameplay mechanics.[5]

He crafted the fictional nation of Arstotzka, fashioned as a totalitarian, 1982 Eastern Bloc state, with the player guided to uphold the glory of the country by rigorously checking passports and defeating those who might infiltrate it.[5] Arstotzka was partially derived from the setting of Pope's earlier game The Republia Times, in which the player acts as editor-in-chief of a newspaper in a totalitarian state and must decide which stories to include or falsify to uphold the interests of the state.[7] Pope also based aspects of the border crossing between Arstotzka and its neighbors on the Berlin Wall and the issues between East and West Germany, stating that he was "naturally attracted to Orwellian communist bureaucracy".[8] He made sure to avoid including any specific references to these inspirations, such as avoiding the word "comrade" in both the English and translated versions, as it would directly allude to Soviet Russia.[6] Using a fictional country gave Pope more freedom in the narrative, since he did not have to base events in the game on any real-world politics, and it avoided preconceived assumptions.[7]

Work on the game began in November 2012; Pope drew on his personal financial reserves from his time at Naughty Dog for what he expected to be a few weeks' worth of effort before moving on to a more commercially viable title.[5] Pope used the Haxe programming language and the NME framework, both open-source.[9] He was able to build on structures he and his wife had developed for Helsing's Fire, an iOS game they created after moving to Japan, which provided the means to control how much information about a character could be shown to the player. This also enabled him to include random and semi-random encounters, in which similar events would occur in separate playthroughs but the immigrant's name or details would differ.[7] Much of the game's design centered on the purposely "clunky" user-interface elements of checking paperwork, inspired by Pope's earlier programming experiences with visual programming languages like HyperCard.[6] Pope found that there was a very careful balance in what rules and randomness could be introduced without overwhelming the player or upsetting the game's balance, and he cut back on some of the randomness he had initially wanted.[7] Pope attempted to keep the narrative non-judgemental about the choices the player made, allowing players to form their own take on the events, and kept elements like the player-character's family status screen, shown at the end of each day, simple so that it would not affect the player's reading of those results.[7]

As Pope developed the game, he regularly posted updates to TIGSource, a forum for independent developers, and received helpful feedback on some of the game's direction.[5] He also created a publicly available demonstration of the game, which drew further positive feedback. Pope opted to submit the game to the Steam storefront through the user-voted Greenlight process in April 2013; he worried that the niche nature of the game would put off potential voters and had expected to gain more interest from upcoming gaming expositions.
However, due to attention drawn by several YouTube streamers who played through the demo, Papers, Please was voted through Greenlight within days.[5][9][10] With the new attention on the project, Pope estimated that the game would take six more months to complete, though it ultimately took nine.[4] One area he expanded on was the creation of unique character names for the various citizens who would pass through the game. He opened name submissions to the public but ended up with over 30,000 entries, more than half of which he considered unusable because they did not fit the types of Eastern European names he wanted or were otherwise "joke names".[5]

After the Greenlight process, Pope started to add features that required the player, as a lowly checkpoint worker, to make significant moral decisions within the game. One such design was the inclusion of the body scanner, which Pope envisioned the player recognizing as an invasion of privacy that was nonetheless necessary to detect, for example, a suicide bomber.[5] These features also helped drive the game's narrative by providing a rationale for why the player, as the passport checker, would need access to new tools in response to the larger events of the game's fiction.[6] After being successfully voted through Greenlight, Papers, Please was touted as an "empathy game", similar to Cart Life (2011), helping Pope to justify his narrative choices.[5] Pope also recognized that not all players would appreciate the narrative aspects, and started to develop the "endless" mode, in which players simply check an endless stream of immigrants until they make too many mistakes.[8]

Pope released the game on August 8, 2013, for Windows and OS X,[3] and for Linux on February 12, 2014.[11] Pope ported the game to the iPad and considered a port to the PlayStation Vita, though he noted that the handheld posed several challenges related to the game's user interface that might require it to be revamped.[12] The Vita version was formally announced at the Gamescom convention in August 2014.[13] With the iOS release, Apple required Pope to censor the full-body scanner feature of the game, considering it pornographic content.[14] However, Apple later commented that the rejection was due to a "misunderstanding" and allowed Pope to resubmit the uncensored game with a "nudity option" included.[15] The iPad version was subsequently released on December 12, 2014.[16] It is nevertheless still rated 17+ on the App Store.[17]

By March 2014, Pope stated that he was "kind of sick to death" of Papers, Please: he wanted to get back to smaller games that would take only a few months to create and release, and felt he had already spent far too long on this one.
He expected to keep supporting Papers, Please and its ports, but had no plans to expand the game or release downloadable content, though he did not rule out revisiting the Arstotzka setting in a future game.[6]

Reception

Aggregate score: Metacritic 85/100[18]
Review scores: Edge 9/10;[19] Eurogamer 9/10;[20] GameSpot 8/10;[21] IGN 8.7/10;[22] PC Gamer (US) 87/100;[23] Polygon 8.0/10[24]
Awards: BAFTA Best Strategy & Simulation

Papers, Please received positive reviews on release, holding an 85 out of 100 rating on Metacritic based on 40 reviews.[18] The game has been praised for the sense of immersion provided by its mechanics and for the intense emotional reactions it provokes.[25] CBC News' Jonathan Ore called Papers, Please a "nerve-racking sleuthing game with relentless pacing and dozens of compelling characters – all from a desk job".[26] Simon Parkin, writing for The New Yorker blog, declared Papers, Please the top video game of 2013: "Grim yet affecting, it's a game that may change your attitude the next time you're in line at the airport."[27]

Some critics received the story very well. Ben "Yahtzee" Croshaw of The Escapist's series Zero Punctuation lauded the game as a truly unique entry for 2013 and made it one of his top five games of that year, citing the game's morality: "[Papers, Please] presents constant moral choices but makes it really hard to be a good person... while you could waive the rules to reunite a couple, you do it at the expense of your own family... You have to decide if you want to create a better world or just look after you and yours."[28] Wired listed Papers, Please as its top game of 2013, observing that the game's title, often associated with the Hollywood depiction of Nazi officials stopping people and demanding to see their identification,[29] alongside the drab presentation, captured the idea of living as a lowly worker in a police state.[30]

Some critics reacted against the paperwork gameplay. Stephanie Bendixsen of the ABC's game review show Good Game found the game "tedious", commenting: "while I found the issues that arose from the decisions you are forced to make quite interesting, I was just so bored that I just struggled to go from one day to the next.
I was torn between wanting to find out more, and just wanting it all to stop."[31]

Papers, Please is considered by several journalists to be an example of video games as an art form.[32][33] It is frequently categorized as an "empathy game", a type of role-playing game that "asks players to inhabit their character's emotional worlds", as described by Patrick Begley of the Sydney Morning Herald,[34] or, as described by Pope himself, "other people simulators".[35] Pope noted that he had not set out to make an empathy game; the emotional ties created by his scenarios arose naturally from developing the core mechanics.[36]

Papers, Please won the Seumas McNally Grand Prize, "Excellence in Narrative", and "Excellence in Design" awards at the 2014 Independent Games Festival Awards, and was nominated for the Nuovo Award.[37][38] The title also won the "Innovation Award" and "Best Downloadable Game" at the 2014 Game Developers Choice Awards.[39] The game won "Best Simulation Game" and was nominated in the "Best Game", "Game Design", and "Game Innovation" categories at the 2014 BAFTA Video Games Awards.[40][41] As of March 2014, at the time of the BAFTA awards, Pope stated that the game had sold 500,000 copies.[4] By August 2016, three years after release, Pope stated that more than 1.8 million copies had been sold across all platforms.[42] An easter egg in Uncharted 4 (2016) references the fictional country of Arstotzka.[43]