Student project

General description

The student (or a group of up to 3 students) is expected to design, develop, and present a Machine Learning-based solution to one problem among a set of problems provided by the teachers.

Output

The student (or group of students) will deliver a single document (as a PDF file) to the teacher, by email, at least one week before the exam date. The maximum length of the document is 4 pages (excluding references), if the document is drafted according to this LaTeX template, or 1200 words (counting every word: title, authors, ...), otherwise.

The document will contain (not necessarily in the following order):

  • the problem statement
  • a description of one or more performance indexes able to capture the degree to which a(ny) solution solves the problem, or some other evaluation criteria
  • a description of the proposed solution, from the algorithmic point of view, with little focus on the tools used to implement it
  • a description of the experimental evaluation of the solution, including
    • a description of the used data, if any
    • a description of the experimental procedure and the comparison baseline, if any
    • the presentation and the discussion of the results
The students are allowed (and encouraged) to refer to the existing (technical/scientific/research) literature to shrink or omit obvious parts of the description, where appropriate.
The students are not required to deliver any source code or other material. However, if needed to gain more insight into their work, the students will be asked to provide more material or to answer some questions.
If the project has been done by a group of students, the document must state, for each student of the group, which of the following activities (at least one) the student took part in:
  • problem statement
  • solution design
  • solution development
  • data gathering
  • writing

Evaluation

The teachers will evaluate the project output on a 0-33 scale.
Part of the score (up to 3 points) is determined statically, independently of the document content, as follows:
  • +1, if the project has been done by a single student
  • from +0 to +2, depending on which problem (among the set provided by the teachers) has been chosen by the student (see below)
The remaining 30 points are assigned according to these criteria:
  • clarity (from 0 to 15): is the document understandable and easy to read? is its length appropriate? are all non-obvious design choices made explicit? are the solution and the experimental campaign repeatable/reproducible based on the provided description?
  • technical soundness (from 0 to 10): are the problem statement, the evaluation criteria, and the evaluation procedure sound? are design choices motivated experimentally, with references, or by other means? are conclusions and findings actually supported by the results?
  • results (from 0 to 5): does the solution effectively/efficiently solve the problem? is there a baseline that is improved on in some way?
Note that the students' solution is not required to exhibit some degree of novelty (i.e., to advance the state of the art of the specific research field). However, students are expected not to simply "cut-and-paste" an existing (research) project.
Note that, depending on the chosen problem, there could be more or less freedom on some aspects (e.g., problem statement, data gathering, and so on).
If the project has been done by a group of students, each student will be graded (for the project part) according to both the overall project score and the student's contribution, inferred from the activities they actually carried out, as specified in the document (see above).

Problems

Leaf identification (+0 score)

Material is available here (data), with an extended description of the data here.
The goal is to propose a method for leaf identification based on the provided leaf attributes and using a proper unsupervised or supervised learning tool.
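For illustration only, a minimal sketch of one possible supervised setup follows; the file name ("leaf.csv") and the column layout (label in the first column, numeric attributes in the others) are assumptions, not a description of the actual data.

    # A minimal supervised sketch; file name and column layout are assumed.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    data = pd.read_csv("leaf.csv", header=None)  # hypothetical file name
    y = data.iloc[:, 0]   # species label (assumed to be in column 0)
    X = data.iloc[:, 1:]  # numeric leaf attributes

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

An unsupervised variant would instead cluster the attribute vectors and compare the clusters against the known species.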

Fraud detection (+0 score)

Material is available here.
The dataset contains transactions made with credit cards by European cardholders over two days in September 2013. The dataset presents an extreme class imbalance: 492 frauds out of 284,807 transactions. The dataset contains these features: 'Time' represents the seconds elapsed between each transaction and the first transaction in the dataset; 'Amount' is the transaction amount; V1, V2, ..., V28 are the principal components obtained with PCA; 'Class' takes value 1 in case of fraud and 0 otherwise. Due to confidentiality issues, the original features and more background information about the data are not provided.
The goal is to set up proper tools (at the data and/or learning and/or validation level) to predict frauds, taking into account the high class imbalance.
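As an illustration, below is a minimal sketch of one possible baseline that addresses the imbalance via class weighting and a precision-recall-based performance index; the file name "creditcard.csv" is an assumption.

    # With about 0.17% positives (492/284,807), accuracy is uninformative;
    # the area under the precision-recall curve (average precision) is a
    # more appropriate performance index for this problem.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import average_precision_score
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    data = pd.read_csv("creditcard.csv")  # assumed file name
    X, y = data.drop(columns=["Class"]), data["Class"]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)  # keep class ratio

    scaler = StandardScaler().fit(X_tr)
    clf = LogisticRegression(class_weight="balanced", max_iter=1000)
    clf.fit(scaler.transform(X_tr), y_tr)
    scores = clf.predict_proba(scaler.transform(X_te))[:, 1]
    print("Average precision:", average_precision_score(y_te, scores))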

Bitcoin value prediction (+0/+1 score)

Optional material for this problem is available here: the dataset consists of historical bitcoin market data at 1-min intervals for select bitcoin exchanges where trading takes place.
The goal is to propose a method for forecasting the market value of a cryptocurrency (bitcoin) on a daily basis. That is, the method has to produce a prediction for the average value of the next day at the latest at midnight of the previous day. Possibly, exogenous data (i.e., data other than past values of the cryptocurrency itself) can be used for the prediction (e.g., Twitter posts): depending on the usage of exogenous data, the bonus portion for this problem can be up to +1.
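Purely as an illustration, a simple autoregressive baseline (no exogenous data) could operate on daily averages; the file name ("btc_1min.csv") and column names ("Timestamp" as Unix seconds, "Weighted_Price" per minute) below are assumptions about the data layout.

    # Autoregressive baseline sketch; file and column names are assumed.
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error

    data = pd.read_csv("btc_1min.csv")
    data["date"] = pd.to_datetime(data["Timestamp"], unit="s")
    daily = data.set_index("date")["Weighted_Price"].resample("D").mean().dropna()

    # Predict the next day's average from the k previous daily averages:
    # all the lagged values are available by midnight of the previous day.
    k = 7
    X = pd.concat([daily.shift(i) for i in range(1, k + 1)], axis=1).dropna()
    X.columns = [f"lag_{i}" for i in range(1, k + 1)]
    y = daily.loc[X.index]

    split = int(len(X) * 0.8)  # chronological split: never shuffle time series
    model = LinearRegression().fit(X.iloc[:split], y.iloc[:split])
    pred = model.predict(X.iloc[split:])
    print("Test MAE:", mean_absolute_error(y.iloc[split:], pred))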

Syntax-based extractor learning (+1 score)

Material is available as a zip file at the bottom of this page.
The goal is to propose a method for learning an extractor of syntax-based entities from examples of desired behavior.
For example, given a text where a user manually highlighted the dates, the method should learn an extractor of dates, applicable to any text.
A learned extractor receives as input a text t and extracts (i.e., gives as output) a list of substrings of t that it deems to match the syntax pattern inferable from the examples. Here, a substring is to be understood as a localized portion of the text, i.e., the extractor may extract two substrings that have equal content but appear in different places in t.
The learning method should receive as input one or more pieces of text, each text t along with the list of all the substrings that should be extracted from t.
A set of example problems is available in the data directory. The problems may or may not be used. Each problem is defined in terms of a single text file. The name of the file is composed of two pieces separated by a dash: the first piece roughly describes the nature of the text (e.g., log, HTML "code", email headers, ...); the second piece roughly describes the nature of the syntax-based entities to be extracted (e.g., dates, URLs, phone numbers, ...). The first line of the file contains a regular expression: when applied to the remaining part of the text, the first capturing group (0-based indexing) matches all and only the corresponding syntax-based entities.
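For illustration, a sketch of how such a problem file could be parsed and its ground truth recovered is given below; "log-dates.txt" is a hypothetical file name, and whether the "first capturing group" corresponds to Python's group 1 (the first parenthesized group) or group 0 (the whole match) should be checked on the actual files.

    # Parse a problem file: first line is the regex, the rest is the text.
    import re

    def load_problem(path):
        with open(path, encoding="utf-8") as f:
            pattern = f.readline().rstrip("\n")  # first line: the regex
            text = f.read()                      # remaining part: the text
        # Entities as (start, end) offsets, so equal-content matches at
        # different positions stay distinct. Group 1 is assumed here; use
        # m.span(0) if the whole match is meant instead.
        truth = [m.span(1) for m in re.finditer(pattern, text)]
        return text, truth

    text, truth = load_problem("log-dates.txt")  # hypothetical file name
    print(len(truth), "entities to extract")
    if truth:
        start, end = truth[0]
        print("first one:", text[start:end])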

References

  1. Bartoli, Alberto, et al. "Inference of regular expressions for text extraction from examples." IEEE Transactions on Knowledge and Data Engineering 28.5 (2016): 1217-1230.

Citation relevance (+2 score)

There is no material for this problem.
The goal is to build a tool which, given a research paper A citing a research paper B, gives an estimate of the relevance of the citation.
Intuitively, a citation is relevant if the content of paper B is in some way useful for understanding and/or putting into context the content of paper A.
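Purely as an illustration of the kind of estimator one might start from (an assumption, not part of the assignment), a deliberately naive baseline could score relevance as the textual similarity between the two papers' abstracts.

    # Naive baseline sketch (an assumption, not the assignment's method):
    # relevance as TF-IDF cosine similarity between the two abstracts.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def citation_relevance(abstract_a, abstract_b):
        tfidf = TfidfVectorizer(stop_words="english")
        vectors = tfidf.fit_transform([abstract_a, abstract_b])
        return cosine_similarity(vectors[0], vectors[1])[0, 0]

    print(citation_relevance(
        "We infer regular expressions for text extraction from examples.",
        "Genetic programming evolves programs guided by a fitness function."))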

References

  1. Bai, Xiaomei, et al. "Identifying Anomalous Citations for Objective Evaluation of Scholarly Article Impact." PLoS ONE 11.9 (2016): e0162364.
  2. Valenzuela, Marco, Vu Ha, and Oren Etzioni. "Identifying meaningful citations." Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015.
  3. https://www.microsoft.com/cognitive-services/en-us/academic-knowledge-api
  4. http://dblp.uni-trier.de/faq/13501473
Attachment: extraction.zip (713k)