Subproject 2: Bidding Contests

Introduction

Ten years from now, companies that need problem-solving software will use SCG artificial markets to determine the winning bidder. The reasons are: (1) The company asking for bids gets working software out of the bidding process. (2) The winning team has to prove itself in a small number of dynamic competitions in the domain of interest to the company. (3) The bidders help each other improve the winning prototype: although there is competition, there is also collaboration. (4) Participating in the bidding competitions is less work for the bidding teams than writing a full bid, which saves money for the economy because less waste is produced. In addition, the bidding teams are paid a small amount for their participation, because the company that calls for bids gets working software. (5) Teams from all over the world can participate because the competitions run on the web, which helps to spread knowledge and wealth more equally around the world. (6) The company accumulates valuable information about the domain of interest by studying the undiscounted beliefs at the end of the contests.

The bidding process works as follows (this process is based on a lunch discussion in early September 2009 with Walter Huersch from Zuehlke Engineering AG, Zurich, Switzerland). The company asking for bids needs problem-solving software for MAX CSP. The company organizes a short sequence of contests for a very simple version of MAX CSP, called TBall MAX CSP. It publishes the TBall MAX CSP requirements and the belief language to be used. The belief language makes statements about the TBall MAX CSP domain, and the undiscounted beliefs remaining after the contests will give the company valuable information about the domain. In TBall, only one kind of relation of arity 3 is used in the constraints. TBall MAX CSP considerably simplifies the bidding effort, yet it still gives the company information about productive teams in this domain. Daily competitions (full round-robin) will be held for 5 business days, giving bidding teams the opportunity to improve their agent. The winning bidder will be the one with the most wins over all 5 competitions.

This subproject will already result in our first agent competition (using the simpler game: Classic SCG CSP T Ball), thanks to the functioning agent "babies" that the teaching staff has prepared for you. Section 5.3 in the requirements document explains the different versions of the Classic SCG CSP game. The agents can communicate in a mutually understandable way, and they can look through the store to find out what is for sale. What they cannot do well yet is recognize which challenges are good to buy, and which kinds of challenges to offer in the market, at a price that will produce a profit.

In class we did an experiment: to perform a requirements analysis of the "make money with challenge C" requirement, we played with a suitably bent coin. We determined that the average satisfaction ratio over all assignments generated with the suitably bent coin is a certain number in [0,1], called break-even(C). We write fsat(I, J) for the fraction of the constraints of problem instance I that are satisfied by assignment J.
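
For concreteness, here is a minimal sketch of how fsat(I, J) could be computed. The Constraint representation and its isSatisfiedBy method are illustrative assumptions, not the distributed player's API; the encoding used in isSatisfiedBy is explained with the relation table below.

     import java.util.List;

     // Sketch only: this Constraint representation and isSatisfiedBy are
     // illustrative assumptions, not the distributed player's API.
     public class Fsat {

         public static class Constraint {
             public final int relation;   // relation number in 0..255
             public final int[] vars;     // indices of the three variables (a, b, c)

             public Constraint(int relation, int... vars) {
                 this.relation = relation;
                 this.vars = vars;
             }

             // True if the relation holds for the values of (a, b, c); the row
             // index is formed in the order c b a (see the encoding table below).
             public boolean isSatisfiedBy(boolean[] assignment) {
                 int row = (assignment[vars[2]] ? 4 : 0)
                         | (assignment[vars[1]] ? 2 : 0)
                         | (assignment[vars[0]] ? 1 : 0);
                 return ((relation >> row) & 1) == 1;
             }
         }

         // fsat(I, J): the fraction of the constraints of instance I that are
         // satisfied by assignment J.
         public static double fsat(List<Constraint> instance, boolean[] assignment) {
             int satisfied = 0;
             for (Constraint c : instance) {
                 if (c.isSatisfiedBy(assignment)) satisfied++;
             }
             return (double) satisfied / instance.size();
         }
     }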

Our agent can exploit this result in three ways:

1) If an agent sees a challenge C on sale at a price p < break-even(C), the agent should buy it in the hope that it can quickly find, for any given problem instance I for C, an assignment J with fsat(I, J) >= break-even(C). The agent knows such an assignment must exist and will look for it by flipping a suitably bent coin.

2) The second way to use the result is never to offer a challenge C at a price below break-even(C), because we know that for every problem instance I of challenge C there is an assignment J such that fsat(I, J) >= break-even(C).

3) The third way to use the result is to solve a given problem instance using the optimally biased coin.

All Boolean relations used in the game are of arity 3 (they take three arguments); there are 256 such relations, which we number 0..255. Important hint: use Ahmed's Relation class, included in the scglibs.jar distributed with the player. The documentation can be found here, and the source is available in the course source directory.

Important methods:

reduce(int, int) : used to compute the quality of a solution
q(int) : used to compute the polynomials that need to be maximized.
The Relation class will greatly simplify your work. An important detail: relation 22(a, b, c) is encoded as follows:
     c b a | 22(a, b, c)
     ------+------------
     0 0 0 |     0
     0 0 1 |     1
     0 1 0 |     1
     0 1 1 |     0
     1 0 0 |     1
     1 0 1 |     0
     1 1 0 |     0
     1 1 1 |     0
Notice the reverse order of the variables: the row index is built from the bits c b a, the reverse of the argument order (a, b, c). The relation number is obtained by reading the output column from the bottom row (c b a = 1 1 1) up to the top row (0 0 0) as a binary number: 00010110 = 22.
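
As a standalone sanity check of this encoding (independent of the Relation class, which you should still use in your actual player), the truth value of a relation number on an argument triple can be read off bit by bit; printing the table for relation 22 reproduces the rows above:

     // Standalone illustration of the relation-number encoding; the real player
     // should use the Relation class from scglibs.jar instead.
     public class RelationDemo {

         // True if relation number rel (0..255) holds for arguments (a, b, c).
         static boolean holds(int rel, boolean a, boolean b, boolean c) {
             // The row index is formed in reverse argument order: c b a.
             int row = (c ? 4 : 0) | (b ? 2 : 0) | (a ? 1 : 0);
             return ((rel >> row) & 1) == 1;
         }

         public static void main(String[] args) {
             System.out.println("c b a | 22(a, b, c)");
             for (int row = 0; row < 8; row++) {
                 boolean a = (row & 1) != 0, b = (row & 2) != 0, c = (row & 4) != 0;
                 System.out.printf("%d %d %d |     %d%n",
                     c ? 1 : 0, b ? 1 : 0, a ? 1 : 0, holds(22, a, b, c) ? 1 : 0);
             }
         }
     }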


Part 1: 15 points


Compute an upper bound on the price at which you would accept a challenge C in Classic SCG CSP T Ball. You would accept C if its price is below this bound, because you know there exists a solution with which you will make a profit.

More formally, we define break-even(C) for a challenge C to be the largest real number in [0,1] for which we can prove that for all problem instances I of challenge C there exists a solution J with fsat(I, J) >= break-even(C).

Solution: Map C to a polynomial f[C] in b, where b is the bias of a bent coin used to assign variables in the solution.

f[C](b) = the expected fraction of satisfied constraints when assignments are generated randomly using a bent coin with bias b.

break-even(C) = f[C](b_max), where b_max is the value of b at which f[C](b) attains its maximum on [0,1].
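
A minimal sketch of this computation for a T Ball challenge, assuming every constraint uses the same arity-3 relation over three distinct variables: the per-constraint satisfaction probability under a coin with bias b is a degree-3 polynomial, and by linearity of expectation f[C](b) equals that polynomial. The Relation class's q(int) should give you the same polynomial; the grid search below is only an illustrative stand-in for a proper maximization:

     // Sketch only: a grid-search approximation of break-even(C) and the optimal
     // bias for a single arity-3 relation; not the player's actual implementation.
     public class BreakEven {

         // f[C](b): expected fraction of satisfied constraints when every variable
         // is set to true independently with probability b. For one relation with
         // three distinct variables per constraint, this is the probability that a
         // single constraint is satisfied.
         static double expectedFsat(int rel, double b) {
             double sum = 0.0;
             for (int row = 0; row < 8; row++) {
                 if (((rel >> row) & 1) == 1) {
                     int ones = Integer.bitCount(row);   // variables set to true in this row
                     sum += Math.pow(b, ones) * Math.pow(1 - b, 3 - ones);
                 }
             }
             return sum;
         }

         // b_max: the bias in [0,1] that maximizes f[C](b), approximated on a grid.
         static double bestBias(int rel) {
             double bestB = 0.0, bestF = 0.0;
             for (int i = 0; i <= 1000; i++) {
                 double b = i / 1000.0;
                 double f = expectedFsat(rel, b);
                 if (f > bestF) { bestF = f; bestB = b; }
             }
             return bestB;
         }

         // break-even(C) = f[C](b_max).
         static double breakEven(int rel) {
             return expectedFsat(rel, bestBias(rel));
         }

         public static void main(String[] args) {
             // Relation 22 holds exactly when one argument is true, so
             // f(b) = 3*b*(1-b)^2, maximized at b = 1/3 with value 4/9.
             System.out.println("b_max ~ " + bestBias(22) + ", break-even ~ " + breakEven(22));
         }
     }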


Part 2: Accept Task: 3 points


Add intelligence to the accepting task of your baby agent so that it accepts every challenge C offered at a price p < break-even(C), provided you have enough funds available.
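
One possible shape for that rule is sketched below; the parameter names are hypothetical placeholders for whatever your baby agent actually exposes:

     // Hypothetical sketch of the accept rule; the parameter names are
     // placeholders, not the baby agent's real API. requiredFunds stands for
     // whatever stake the game charges for taking on the challenge.
     static boolean shouldAccept(double price, double breakEven,
                                 double availableFunds, double requiredFunds) {
         // Buy only when the asking price is strictly below break-even(C)
         // and we can afford to take on the challenge.
         return price < breakEven && availableFunds >= requiredFunds;
     }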


Part 3: Offer Task: 3 points


Add intelligence to the offering task of your baby agent so that it only offers challenges C at a price p > break-even(C). The hope is that once such a challenge is accepted, your agent will produce a problem instance for which less than a fraction break-even(C) of the constraints can be satisfied. We will focus on this task (providing) in later subprojects, but some of you might already have an idea of what should be done.


Part 4: Solve Task: 4 points


Add intelligence to the solving task of your agent so that it uses an optimally biased coin to generate candidate assignments. Generate a few random assignments and take the best one.
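
A sketch of that sampling strategy, reusing the illustrative Fsat sketch from earlier; the optimal bias bMax could come, for instance, from a grid search like the one in the BreakEven sketch above:

     import java.util.List;
     import java.util.Random;

     // Sketch only: best-of-k assignments generated with a coin of bias bMax,
     // built on the illustrative Fsat sketch (not the real player API).
     public class Solver {

         static boolean[] solve(List<Fsat.Constraint> instance, int numVariables,
                                double bMax, int tries) {
             Random rng = new Random();
             boolean[] best = new boolean[numVariables];
             double bestScore = -1.0;
             for (int t = 0; t < tries; t++) {
                 boolean[] candidate = new boolean[numVariables];
                 for (int v = 0; v < numVariables; v++) {
                     candidate[v] = rng.nextDouble() < bMax;   // flip the optimally biased coin
                 }
                 double score = Fsat.fsat(instance, candidate);
                 if (score > bestScore) {
                     bestScore = score;
                     best = candidate;
                 }
             }
             return best;   // the best of the sampled assignments
         }
     }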


Part 5: Provide Task: 4 points


The providing task needs to create more complex problem instances. If you don't have any better ideas, you can use this problem instance (i.e., have your providing task produce it):

     R(v1 v2 v3)
     R(v1 v2    v4)
     R(v1 v2       v5)
     R(v1    v3 v4)
     R(v1    v3    v5)
     R(v1       v4 v5)
     R(   v2 v3 v4)
     R(   v2 v3    v5)
     R(   v2    v4 v5)

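If it helps, the listed instance could be built as follows with the illustrative Constraint representation from the earlier sketches. The relation number rel stands for whatever relation the challenge C uses; your actual providing task would of course work with the player's own instance classes:

     import java.util.ArrayList;
     import java.util.List;

     // Sketch only: the nine constraints above, over variables v1..v5 (indices 0..4),
     // using the illustrative Constraint class from the Fsat sketch.
     public class SampleInstance {

         static List<Fsat.Constraint> build(int rel) {
             int[][] triples = {
                 {0, 1, 2}, {0, 1, 3}, {0, 1, 4},
                 {0, 2, 3}, {0, 2, 4}, {0, 3, 4},
                 {1, 2, 3}, {1, 2, 4}, {1, 3, 4}
             };
             List<Fsat.Constraint> instance = new ArrayList<>();
             for (int[] t : triples) {
                 instance.add(new Fsat.Constraint(rel, t[0], t[1], t[2]));
             }
             return instance;
         }
     }
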
Our expectation is that if you add this knowledge to your agent, it will perform better than the original baby agent. We will include a plain baby agent in the competition for comparison.


Testing


It is important to unit test all aspects of your player. We expect you to put some "Brain Power" into writing test cases, test files, and extra test classes. Please demonstrate this to us by writing a reasonable number of test cases and documenting your software accordingly. This will be part of your grade, and it also builds good habits for the future.
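
For example, a unit test could pin the break-even computation to a hand-checked value; the sketch below assumes JUnit 4 and the illustrative BreakEven class from above:

     import static org.junit.Assert.assertEquals;
     import org.junit.Test;

     // Hypothetical test sketch (JUnit 4) against the illustrative BreakEven class.
     public class BreakEvenTest {

         @Test
         public void breakEvenOfRelation22IsFourNinths() {
             // Relation 22 holds exactly when one argument is true, so
             // f[C](b) = 3*b*(1-b)^2, which is maximized at b = 1/3 with value 4/9.
             assertEquals(4.0 / 9.0, BreakEven.breakEven(22), 1e-3);
         }
     }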


Submission


You will turn in your player/agent to Blackboard as a JAR file containing only your source code. You will also place a runnable JAR of your player (without source) in /scratch/lieber/cs4500/f09/sep25.

The makeSubmission.sh script in the player distribution will create the needed archives for submission. Supposing your team name is "BryansTeam", run the script within your player's directory as:

     ./makeSubmission.sh "BryansTeam"
The source JAR BryansTeam_source.jar will be created and should be uploaded to Blackboard. The runnable JAR, BryansTeam.jar, should be copied to /scratch/lieber/cs4500/f09/sep25/BryansTeam.jar to help other teams in testing later. Remember that your source code will be made available to the class at a later time (i.e., write good comments, and be professional). The submitted (runnable) agents will be made available to the entire class so that you can run local competitions, check the results, observe the behavior of all agents, and learn from them.

Competitions will be held on the due date at midnight, started by an automated script. Players will be able to register (start running) around 6pm on the due date, which means the administrator will be up, accepting/registering participating players for the competition (more on that later). The competitions are expected to run in less than an hour, so the results should be available by the time you wake up the following day.

Important: be sure that the computer on which you run your agent is reachable from the Internet! We suggest that you use one of the CCIS Linux machines; we will go over the competition setup in the second lab.

Acknowledgements

This kind of project would be impossible without the SCG software developed by Ahmed Abdelmeged, Bryan Chadwick, Alex Dubreuil, and numerous students in my software development classes during the last 3 years. External financial support came from GMO and Novartis. The SCG software ensures that the competitions are fair and that the winning agent has the best problem-solving algorithm for the given problem and belief language. The SCG software also provides the baby agents, eliminating the uninteresting programming tasks, so the teams can focus on adding "intelligence". DemeterF, produced by Bryan Chadwick, was very useful for generating code and for making the implementation flexible and easily parallelizable for multi-core machines.