The "Scientific Community Game" work by Ahmed Abdelmeged and Karl Lieberherr defines an approach to modeling formal science and to setting up social networks for advancing and teaching it. The description below presents our vision, independent of the various applications to crowdsourcing and to solving computational problems. For further information, see SCG-Publications and, specifically, the thesis proposal by Ahmed Abdelmeged: Crowdsourcing of Computational Problem Solving, CCIS Technical Report NU-CCIS-2013-10, March 2013.

We describe a generic game, called the Scientific Community Game (SCG), for the Popperian scientific method. Popper, one of the most influential philosophers of science of the previous century, promoted in ``Conjectures and Refutations'' the idea that each claim should come with a description of how to refute it. We use relative refutation, not absolute refutation.

SCG is a game designed to encourage constructively solvable disputes about a predefined set of claims. We want to encourage such disputes because they help advance and focus scientific discourse and learning.

We consider a lab that formulates claims about the science. We have several lab users, called scholars, who form their opinions about the claims of the lab. They then meet in pairs on the web and engage in a scientific discourse to determine who likely has the correct opinion. This determination is done by playing several binary games in which the players take the position they believe is true.

An example of a domain that is sufficiently well understood is robotics. To program your robot and see it in action you don't need a real robot, but only the Robot Operating System (ROS) with Gazebo and some libraries, such as a physics simulation library. See http://www.ros.org/wiki/ and http://gazebosim.org/. ROS and Gazebo provide a framework to use SCG to develop and teach robotics algorithms and the integration of various robotics components.

The claims are grouped into claim families that we call labs. The purpose of the Contradiction-Agreement Game (CAG) is to (1) ask the scholars s1 and s2 to give a personal performance of the skills needed to defend the true claims in the lab, (2) evaluate the scholars with respect to their skills relevant to the lab, and (3) bring at least one of the scholars into a ``personalized'' contradiction. This does not mean that the position of the scholar who got into the contradiction is false, but it is an indication that it might be, casting doubt on the skills of the scholar who took that position. The personalized contradiction has the following flavor: you predict the outcome of an experiment connected to the claim and involving the two scholars, but when the experiment is carried out, the predicted outcome does not happen.

During the game the scholars try to push each other into contradictions, and the one who succeeds gets a point. The winner teaches a lesson to the other scholar, who was self-contradictory. We use the points a scholar collects as a first approximation of the strength of the scholar; our SCG-strength algorithm gives the details of how we take the strength of the opponents into account. The two scholars s1 and s2 play two roles: s1 takes role r1 and s2 takes role r2, where r1 and r2 are either the verifier or the falsifier role. We take a personalized interpretation of defense and refutation, which is with respect to the two scholars s1 and s2: they assume the verifier or falsifier roles, and the game then determines the winner, which gives evidence that the winner's position is correct (but does not prove it). Special attention is given to the situation in which one scholar is forced, e.g., when both scholars want to be verifiers.

As a word of caution, SCG is very useful and broadly applicable, but it is not a panacea: SCG has the garbage-in, garbage-out property. When the scholars in SCG do not give their best, they will create a knowledge base of claims that has minimal value. But it takes only one strong scholar to significantly improve the quality of the knowledge base.

Science is modeled by labs and lab relations. Lab relations are the tool to modularize labs and break them down into components. Labs are based on a computable model (structure). Each lab contains a family of related claims that is closed under negation. Central is the Contradiction-Agreement Game, abbreviated CAG. Each claim c has an associated game CAG(c,s1,s2,p1,p2) that involves two scholars s1 and s2 and their positions p1 and p2, each of which is either verifier or falsifier. If s1 believes the claim is true, s1 will choose verifier as s1's position, and if s2 believes that c is false, s2 will choose falsifier as s2's position. The payoffs of CAG(c,s1,s2,p1,p2) are defined below.
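As an illustrative sketch (Python is our choice here, and the string representation of claims is our assumption), a lab's claim family and its closure under negation can be checked as follows:

```python
def negate_claim(c):
    """Syntactic claim negation, with !!c = c."""
    return c[1:] if c.startswith("!") else "!" + c

def is_closed_under_negation(claims):
    """A lab's claim family must contain the negation of each of its claims."""
    return all(negate_claim(c) in claims for c in claims)
```

For example, the family {"c", "!c"} is closed under negation, while {"c"} alone is not.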

CAG encourages innovation because forced scholars can score while their adversary cannot. This provides an incentive for forced scholars to win even though they are forced to take positions that often contradict their own intuition as well as the predominant beliefs.

CAGs ensure that some form of progress takes place: either the scholar scores are updated or a potentially correct contribution has been made. In the first case, the loser receives targeted feedback; in the second, the community benefits from the potentially correct contribution.

We define: !verifier = falsifier, !falsifier = verifier. The variables p, p1, and p2 are of type position = {verifier, falsifier}.
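The position type and its negation can be sketched in code (a minimal illustration in Python; the language choice and the `Position`/`negate` names are ours, not part of the original definition):

```python
from enum import Enum

class Position(Enum):
    VERIFIER = "verifier"
    FALSIFIER = "falsifier"

def negate(p: Position) -> Position:
    """!verifier = falsifier, !falsifier = verifier."""
    return Position.FALSIFIER if p is Position.VERIFIER else Position.VERIFIER
```

Note that negation is an involution: applying it twice returns the original position.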

CAG(c,s1,s2,p1,p2) returns a pair (payoff(s1), payoff(s2)) if p1 != p2; the scholar with the higher payoff is the winner. If p1 = p2, CAG returns two pairs: CAG(c,s1,s2,p1,p1) = CAG(c,s1,s2,p1,!p1) followed by CAG(c,s1,s2,!p1,p1).
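This dispatch on positions can be sketched as follows (Python, purely illustrative; `play_single_game` is a hypothetical stand-in for the actual protocol of moves between the two scholars, which the text does not specify):

```python
# Positions as plain strings; negation as defined in the text.
def negate(p):
    return "falsifier" if p == "verifier" else "verifier"

def play_single_game(c, s1, s2, p1, p2):
    # Hypothetical placeholder: the real protocol would let s1 and s2
    # exchange moves about claim c and return a payoff pair.
    raise NotImplementedError

def cag(c, s1, s2, p1, p2, play=play_single_game):
    """CAG(c,s1,s2,p1,p2): one payoff pair if p1 != p2; otherwise two
    pairs, one game with p1 against !p1 and one with positions swapped."""
    if p1 != p2:
        return play(c, s1, s2, p1, p2)        # (payoff(s1), payoff(s2))
    return (play(c, s1, s2, p1, negate(p1)),  # first game
            play(c, s1, s2, negate(p1), p1))  # second game, positions swapped
```

The `play` parameter is only there so that different single-game protocols can be plugged in; it is our design choice, not part of the original description.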

The CAG has several interesting properties. Assuming that s1 and s2 play the same moves in both games, CAG(c,s1,s2,p1,p2) = CAG(!c,s2,s1,!p1,!p2).

Game CAG(c,s1,s2,p,!p) must have the following property when both scholars could freely choose their positions: CAG(c,s1,s2,p,!p) = (1,0) means that s1 has pushed s2 into a contradiction and s1 gets a point; CAG(c,s1,s2,p,!p) = (0,1) means that s2 has pushed s1 into a contradiction and s2 gets a point. The situation is different when the second scholar is forced.

If the forced scholar loses: CAG(c,s1,s2,p,!p(forced)) = (0,0). The justification is that the forced scholar is not penalized for losing a lost position.

If the forced scholar wins: CAG(c,s1,s2,p,!p(forced)) = (0,1). Only the forced scholar can score in such a game.
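The scoring rules for a single game with opposite positions can be collected into one payoff function (an illustrative Python sketch; the names `winner` and `s2_forced` are our assumptions for who prevailed and whether s2 was forced into s2's position):

```python
def payoffs(winner, s2_forced=False):
    """Payoff pair (payoff(s1), payoff(s2)) for one game in which the
    scholars hold opposite positions.

    Rules from the text:
      - unforced: the winner gets one point, the loser none;
      - s2 forced and s2 loses: (0, 0) -- the forced scholar is not
        penalized for losing a lost position;
      - s2 forced and s2 wins: (0, 1) -- forced scholars can score
        while their adversary cannot.
    """
    if winner == "s1":
        return (0, 0) if s2_forced else (1, 0)
    return (0, 1)
```

Note how, in the forced case, s1 can never score, which is what gives forced scholars their incentive to win.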

s1 defends claim c against s2: CAG(c,s1,s2,verifier,falsifier) = (1,0).

s1 refutes claim c against s2: CAG(c,s1,s2,falsifier,verifier) = (1,0).

Defending a claim c is the same as refuting !c: CAG(c,s1,s2,p,!p) = (1,0) iff CAG(!c,s1,s2,!p,p) = (1,0).

In summary, it is enough to consider the following three kinds of games: CAG(c,s1,s2,p,!p), CAG(c,s1,s2,p,!p(forced)), and CAG(c,s1,s2,p,p). What is important is whether the scholars assume different positions and whether a scholar is forced.

Page edited by Ahmed Abdelmeged and Karl Lieberherr.