KDD 2019 | Policy Learning for Malaria Control

Getting Started in Python

Example starter code*

*Updated 29th April

Please use the following class template as a pattern to encapsulate your submission.

You may modify it extensively; however, you must:

  • permit the environment object to be passed in the class initialization, and
  • implement the "generate" method to return the best policy developed by your algorithm. In this template, you are free to add (but not remove) methods, parameters, etc. and also to set defaults for any of your additional parameters which permit your code to perform at its best. Only the generate method will be called during the evaluation process, and no parameters will be passed.
from sys import exc_info
from netsapi.challenge import *
# For a given environment, evaluate a policy by applying its evaluateReward method

class CustomAgent():
    def __init__(self, environment):
        self.environment = environment

    def generate(self):
        best_policy = None
        best_reward = -float('Inf')
        try:
            # ask for a set of candidate solutions to be evaluated
            policies = [{'1':[0.55, 0.7],'2':[0,0],'3':[0,0],'4':[0,0],'5':[0,0]}]
            rewards = self.environment.evaluateReward(policies)
            best_policy = policies[0]
            best_reward = rewards[0]
        except (KeyboardInterrupt, SystemExit):
            print(exc_info())
        return best_policy, best_reward
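Only the two hooks above are required by the evaluator. A minimal sketch of that interaction, using a stub environment in place of netsapi's real one (the stub class and its fixed reward are assumptions for illustration only):

```python
class StubEnvironment:
    """Stands in for the netsapi challenge environment; returns a fixed reward
    for every candidate policy, so the agent's plumbing can be checked offline."""
    def evaluateReward(self, policies):
        return [100.0 for _ in policies]

class CustomAgent:
    def __init__(self, environment):
        # the evaluator passes the environment object into the constructor
        self.environment = environment

    def generate(self):
        # evaluate one hard-coded candidate policy and return it with its reward
        policies = [{'1': [0.55, 0.7], '2': [0, 0], '3': [0, 0],
                     '4': [0, 0], '5': [0, 0]}]
        rewards = self.environment.evaluateReward(policies)
        return policies[0], rewards[0]

# This mirrors how the evaluation harness drives the class: generate() is
# called with no parameters after construction.
best_policy, best_reward = CustomAgent(StubEnvironment()).generate()
```

Swapping `StubEnvironment` for the real challenge environment should leave the agent code unchanged.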

Example submission: test.py

from sys import exit, exc_info, argv
import numpy as np
import pandas as pd

# Install the challenge environment first (run once in a shell or notebook):
# pip3 install git+https://github.com/slremy/netsapi --user --upgrade

from netsapi.challenge import *

class CustomAgent:
    def __init__(self, environment):
        self.environment = environment

    def generate(self):
        best_policy = None
        best_reward = -float('Inf')
        candidates = []
        try:
            # Agents should make use of 20 episodes in each training run, if making sequential decisions
            for i in range(20):
                policy = {}
                for j in range(5): # episode length
                    # sample a random action pair for each timestep
                    policy[str(j+1)] = list(np.random.uniform(0, 1, 2))
                candidates.append(policy)
            rewards = self.environment.evaluatePolicy(candidates)
            best_policy = candidates[np.argmax(rewards)]
            best_reward = rewards[np.argmax(rewards)]
        except (KeyboardInterrupt, SystemExit):
            print(exc_info())
        return best_policy, best_reward
EvaluateChallengeSubmission(ChallengeSeqDecEnvironment, CustomAgent, "example.csv")


Where can I learn more about malaria?



What are ITNs and IRS? 


Insecticide Treated Nets (ITNs) and Indoor Residual Spraying (IRS) are among the most studied interventions in modelling malaria transmission.
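In the starter code's policy format, each key appears to be a yearly timestep and each value a pair of coverage levels; reading the pair as [ITN coverage, IRS coverage] is an inference from the template rather than a documented API. A small helper under that assumption:

```python
def make_constant_policy(itn, irs, years=5):
    """Build a policy deploying the same ITN/IRS coverage fractions every year.

    Keys are stringified years '1'..'5', matching the starter template; the
    [itn, irs] interpretation of each pair is an assumption for illustration.
    """
    return {str(year): [itn, irs] for year in range(1, years + 1)}

# e.g. cover 50% of the population with nets and spray 30% of dwellings each year
policy = make_constant_policy(0.5, 0.3)
```

Such helpers make it easy to sweep coverage combinations when searching for a good policy.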


Where can I find more information about Reinforcement Learning? 


Reinforcement learning is covered in our tutorial material, and you will also find extensive reading materials and many useful blogs suited to your level and interest.


What malaria setting is being modelled? 


For the purposes of this challenge we have developed our own model for an abstract location, both to avoid any distraction outside of the KDD Cup environment and to prevent any advantage that background material might provide.

Policy Learning for Malaria Control

Malaria is thought to have caused the greatest disease burden throughout human history, and it continues to pose a significant and disproportionate global health burden: 50% of the world's population is at risk of malaria infection, and Sub-Saharan Africa is most affected, with 90% of all cases.

Through this KDD Cup|Humanity RL track competition we are looking for participants to apply machine learning tools to determine novel solutions which could impact malaria policy in Sub-Saharan Africa. Specifically: how should combinations of interventions that control the transmission, prevalence and health outcomes of malaria infection be distributed in a simulated human population?

This challenge has been framed as a reinforcement learning problem, and participants are expected to submit high-performing solutions to the sequential decision-making task. In this competition, actions receive stochastic and delayed rewards, and evaluations are resource-constrained by both the monetary and computational costs associated with implementing and observing the impact of an action.
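Because rewards are stochastic, one common tactic (not prescribed by the organisers) is to re-evaluate a promising policy several times and rank candidates by their mean reward, at the cost of spending more of the limited evaluation budget per candidate. A hedged sketch:

```python
def mean_reward(evaluate, policy, repeats=3):
    """Average the stochastic reward of one policy over several evaluations.

    `evaluate` stands in for a call like environment.evaluateReward([policy])[0];
    the function name and the repeat count are illustrative assumptions, not
    part of the challenge API.
    """
    total = 0.0
    for _ in range(repeats):
        total += evaluate(policy)
    return total / repeats
```

With a 20-episode budget, `repeats=3` allows roughly six candidates to be compared on averaged rewards instead of twenty on single noisy samples.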


Submissions are encouraged from participants without previous experience in reinforcement learning; by reading through the materials, we hope you will see the need for contributions that inform decision making for this complex real-world problem.

Feedback phase 15th April

·      Learn: Read the Getting started code in Python, Follow Tutorials, Blogs and Engage in the Discussion Forum

·      Test: Run the example code and generate a submission file to post a Public leaderboard score

·      Build: Iterate, Improve and Refine your own code and make sure you’ve scored in the top 100 of the Public leaderboard. Along with a top 100 score you will have to submit your code to proceed to the Check phase.

·      Deadline 00:01 UTC 30th June


Check phase 30th June


·      The top 100 participants on the Public leaderboard from the Feedback phase will have access to the new Check phase environment

·      Build: There is no Public leaderboard in the Check phase; the private dashboard remains so you can continue to iterate, improve and refine your new Check phase submission. To be considered in the Verification phase you must submit your final Check phase code

·      Deadline 00:01 UTC 7th July



Verification phase 7th July


·      Check phase submissions will be run, validated and verified by the organisers on the Verification phase environment

·      The top 10 winners of the competition will be determined, and a public ranking/scoring of the Check phase submissions will be posted on the platform.

·      Winners Announced 20th July


Reward : $25K (Sponsor KDD)

Timeline (UTC)

Apr 15th, 2019: Competition Launch. Participants can register, join the discussion forum and form teams around the Challenge.

Jun 30th, 2019: End of the Feedback Phase, top 100 contestants migrated automatically to Check Phase, prompted to submit code

Jul 7th, 2019: End of the Check Phase, organizers start code verification.

Jul 20th, 2019: Announcement of the KDD Cup|Humanities Track Winner.

Aug 4th, 2019: Beginning of KDD 2019.

  • About KDD 2019 Conference

    The annual KDD conference is the premier interdisciplinary conference bringing together researchers and practitioners from data science, data mining, knowledge discovery, large-scale data analytics, and big data.

    KDD 2019 Conference:


    August 4 - 8, 2019

    Anchorage, Alaska USA

    Dena’ina Convention Center and William Egan Convention Center


    About Other 2019 KDD Cup Competitions

    KDD Cup is the annual Data Mining and Knowledge Discovery competition organized by ACM Special Interest Group on Knowledge Discovery and Data Mining, the leading professional organization of data miners. SIGKDD-2019 will take place in Anchorage, Alaska, US from August 4 - 8, 2019. The KDD Cup competition is anticipated to last for 2-4 months, and the winners will be notified by mid-July 2019. The winners will be honored at the KDD conference opening ceremony and will present their solutions at the KDD Cup workshop during the conference. 


    In KDD Cup 2019, there are three competition tracks:


    1.    Automated Machine Learning Competition Track (Auto-ML Track) 

    2.    Regular Machine Learning Competition Track (Regular ML Track)

    3.    “Research for Humanity” Reinforcement Learning Competition Track (Humanity RL Track) [This Competition]

    About KDD Cup Chairs

    Taposh Dutta-Roy (Kaiser Permanente)

    Wenjun Zhou (University of Tennessee Knoxville)

    Iryna Skrypnyk (Pfizer)



Submission method:

Code must be submitted through the competition platform by the deadlines for evaluation. The top 100 participants on the public leaderboard of test submissions will be given access to the final submission phase. Using multiple accounts to increase the number of submissions is NOT permitted. The entries must be formatted as specified on the Competition page. In case of any problems, send email to support@hexagon-ml.com. Competition-related questions can be posed at our discussion forum, KDD 2019 Policy Learning For Malaria Elimination.


The 10 top ranking final submissions for the KDD Cup|Humanities Track Competition may qualify for cash prizes.


There is no other publication requirement. If they accept their prize, the winners will be required to make their code publicly available under an OSI-approved license (for instance Apache 2.0, MIT or a BSD-like license) within a week of the deadline for submitting the final results. Entries exceeding the time budget will not qualify for prizes. In case of a tie, the prize will go to the participant who submitted his/her entry first. Non-winners or entrants who decline their prize retain all rights to their entries and are not obliged to publicly release their code.



To receive announcements and be informed of any change in rules, the participants must provide a valid email to the challenge platform.

Conditions of participation:

Participation requires complying with the rules of the Competition. Prize eligibility is restricted by US government export regulations. The organizers, sponsors, their students, close family members (parents, sibling, spouse or children) and household members, as well as any person having had access to the truth values or to any information about the data or the Competition design giving him (or her) an unfair advantage are excluded from participation. A disqualified person may submit one or several entries in the Competition and request to have them evaluated, provided that they notify the organizers of their conflict of interest. If a disqualified person submits an entry, this entry will not be part of the final ranking and does not qualify for prizes. The participants should be aware that the organizers reserve the right to evaluate for scientific purposes any entry made in the challenge, whether or not it qualifies for prizes.


The Winners will be invited to attend a workshop organized in conjunction with KDD and contribute to the proceedings.


The participants must register to the platform and provide a valid email address. Teams must register only once and provide a group email, which is forwarded to all team members. Teams or solo participants registering multiple times to gain an advantage in the competition may be disqualified.


The participants who do not present their results at the workshop can elect to remain anonymous by using a pseudonym. Their results will be published on the leaderboard under that pseudonym, and their real name will remain confidential. However, the participants must disclose their real identity to the organizers to claim any prize they might win. 


Rank Team Score Count Submitted Date
1 jingw2 510.42360308 11 May 15, 2019, 6:44 p.m.
2 NCKU CSIE bill wei 482.69204540 7 May 22, 2019, 7:24 a.m.
3 vlad 461.09484206 6 May 15, 2019, 4:10 a.m.
4 Testtesttest 400.47680815 1 May 22, 2019, 12:12 p.m.
5 shark 377.23278589 9 May 22, 2019, 10:21 p.m.
6 laoqiongsuan 374.58534620 2 May 21, 2019, 2:59 a.m.
7 Paulina 372.39671318 12 May 22, 2019, 6:05 a.m.
8 AndyLee 367.16869774 6 May 24, 2019, 12:34 a.m.
9 believe 347.47084573 2 May 22, 2019, 2:34 a.m.
10 et373 330.23897515 3 May 20, 2019, 12:29 a.m.
11 Karab 324.76906836 4 May 22, 2019, 4:43 a.m.
12 LOLs 321.82541804 3 May 21, 2019, 3:42 p.m.
13 Please tell us competition details 262.99157936 2 May 9, 2019, 10:14 p.m.
14 Zachary 240.27731760 3 May 22, 2019, 11:57 a.m.
15 ka2yama 233.61100127 2 May 16, 2019, 7:29 a.m.
16 Chia 193.75032820 4 May 22, 2019, 11:37 a.m.
17 LahaAle 190.99766794 2 May 21, 2019, 1:57 p.m.
18 yutaro0122 188.94431124 1 May 20, 2019, 6:38 a.m.
19 gnemoto 183.46269340 4 May 16, 2019, 4:16 a.m.
20 DeepBlueAI 179.53279228 3 May 21, 2019, 8:41 p.m.
21 mastermind99 176.70695794 7 May 14, 2019, 11:02 p.m.
22 Bonum 174.79004519 3 May 23, 2019, 7:54 a.m.
23 Tensorflaws 174.15693342 2 May 22, 2019, 6:39 a.m.
24 NCKU si tong 173.71580241 1 May 22, 2019, 12:11 a.m.
25 oetbent 173.45583181 7 May 14, 2019, 9:46 a.m.
26 kimuragt 172.53509911 1 May 17, 2019, 7:06 p.m.
27 guest1 167.36329676 1 May 14, 2019, 9:50 a.m.
28 Sherry 167.36329676 2 May 15, 2019, 12:22 a.m.
29 major_tom 167.36329676 2 May 16, 2019, 6:47 a.m.
30 m_nakamura 167.36329676 1 May 16, 2019, 8:02 p.m.
31 NCKU CSIE 167.36329676 2 May 17, 2019, 6:04 a.m.
32 tduttaroy 167.36329676 1 May 23, 2019, 6:12 a.m.
33 Shashwat 164.32415140 1 May 16, 2019, 3:49 a.m.
34 fddfdd 158.83544538 1 May 22, 2019, 11:54 a.m.
35 NCKU Jerry 123.46349855 1 May 22, 2019, 10:34 a.m.
36 zunzun 109.89073902 1 May 9, 2019, 5:31 a.m.
37 tarai 101.27567986 3 May 14, 2019, 1:31 a.m.
38 Apoorv 77.99153082 1 May 19, 2019, 1:49 a.m.
39 Ashutosh 71.36875808 2 May 16, 2019, 5:34 a.m.
40 Larr 61.89057914 1 May 13, 2019, 4:17 a.m.
41 yuying 59.17428125 1 May 20, 2019, 10:41 p.m.
42 xingzoudeyao 3.30617133 1 May 13, 2019, 1:18 a.m.

Data License

This competition is brought to you by IBM and University of Oxford. 



Note: This competition is hosted by us in collaboration with IBM and University of Oxford. For questions about the content of this competition, please contact the Competition Host. For questions regarding the website features, please contact support@hexagon-ml.com.