Student project

General description

The student (or group of up to 3 students) is expected to design, develop, and present a Machine-Learning-based solution to one problem chosen among a set of problems provided by the teachers.

Output

The student (or group of students) will deliver a single document (as a PDF file), no later than one week before the exam date, by email, to both teachers (prof. Matilde Trevisani and me). The maximum length of the document is 4 pages (excluding references), if the document is drafted according to this LaTeX template, or 1200 words (including every word: title, authors, references, ...), otherwise.

The document will contain (not necessarily in the following order):

  • the problem statement
  • a description of one or more performance indexes able to capture the degree to which a(ny) solution solves the problem, or some other evaluation criteria
  • a description of the proposed solution, from the algorithmic point of view, with little focus on the tools used to implement it
  • a description of the experimental evaluation of the solution, including
    • a description of the used data, if any
    • a description of the experimental procedure and the comparison baseline, if any
    • the presentation and the discussion of the results
The students are allowed (and encouraged) to refer to existing (technical/scientific/research) literature to shorten or omit obvious parts of the description, if appropriate.
The students are not required to deliver any source code or other material. However, if needed to gain more insight into their work, the students may be asked to provide further material or to answer some questions.
If the project has been done by a group of students, the document must state, for each student in the group, which of the following activities (at least one) the student took part in:
  • problem statement
  • solution design
  • solution development
  • data collection
  • writing

Evaluation

The teachers will evaluate the project output on a 0-33 scale.
Part of the score (up to 3 points) is determined statically, independently of the document content, as follows:
  • +1, if the project has been done by a single student
  • from +0 to +2, depending on which problem (among the set provided by the teachers) has been chosen by the student (see below)
The remaining 30 points are assigned according to these criteria:
  • clarity (from 0 to 15): is the document understandable and easy to read? is the length appropriate? are all non-obvious design choices made explicit? is the solution/experimental campaign repeatable/reproducible based on the provided description?
  • technical soundness (from 0 to 10): are the problem statement, evaluation criteria, evaluation procedure sound? are design choices motivated experimentally, with references, or by other means? are conclusions and findings actually supported by results?
  • results (from 0 to 5): does the solution effectively/efficiently solve the problem? is there a baseline which is improved in some way?
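As an illustration of how the scale composes, the following sketch computes a total score from hypothetical sub-scores (all the numbers below are invented, not an actual grading example):

```python
# Hypothetical composition of a project score on the 0-33 scale.
single_student_bonus = 1  # +1 if the project is done by a single student
problem_bonus = 2         # +0 to +2, depending on the chosen problem
clarity = 12              # 0 to 15
technical_soundness = 8   # 0 to 10
results = 3               # 0 to 5

total = (single_student_bonus + problem_bonus
         + clarity + technical_soundness + results)
print(total)  # 26
```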
Note that the students' solution is not required to exhibit some degree of novelty (i.e., to advance the state of the art of the specific research field). However, students are expected not to simply "cut and paste" an existing (research) project.
Note that, depending on the chosen problem, there could be more or less freedom on some aspects: e.g., problem statement, data collection, and so on.
If the project has been done by a group of students, each student will be graded (for the project part) according to both the overall project score and the student's contribution, inferred from the activities she/he actually carried out, as specified in the document (see above).

Problems

The list of problems might be updated during the course.

Leaf identification (+0 score)

Material is available here (data), with an extended description of the data here.
The goal is to propose a method for leaf identification based on the provided leaf attributes and using a proper unsupervised or supervised learning tool.
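A minimal sketch of one possible supervised approach (k-nearest neighbors on scaled attributes) is shown below; since the actual dataset is not reproduced here, synthetic data with an invented shape (14 numeric attributes, 30 species) stands in for the provided leaf attributes:

```python
# Sketch of a supervised leaf-identification pipeline with scikit-learn.
# Synthetic data replaces the real dataset; shapes are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the provided data: 600 leaves, 14 attributes, 30 species.
X, y = make_classification(n_samples=600, n_features=14, n_informative=10,
                           n_classes=30, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Scale attributes, then classify with k-NN; evaluate with accuracy.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

An unsupervised variant could instead cluster the attributes (e.g., with k-means) and compare the clusters against the species labels.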

Citation relevance (+2 score)

There is no material for this problem.
The goal is to build a tool which, given a research paper A citing a research paper B, gives an estimate of the relevance of the citation.
Intuitively, a citation is relevant if the content of paper B is in some way useful for understanding and/or putting in a context the content of paper A.
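Following that intuition, one possible ingredient of such a tool is the textual similarity between the citation context in paper A and the abstract of paper B; the sketch below computes it with TF-IDF and cosine similarity (all texts are invented for illustration):

```python
# Sketch of one feature for citation relevance: TF-IDF cosine similarity
# between the citation context (paper A) and the abstract (paper B).
# Both texts below are invented examples, not real papers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

context_in_a = ("We adopt the feature set of [B], which characterizes "
                "citations by their frequency and position in the paper.")
abstract_of_b = ("We present a supervised method for identifying meaningful "
                 "citations, based on features such as citation frequency.")

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([context_in_a, abstract_of_b])
relevance = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"relevance estimate: {relevance:.2f}")
```

A complete solution would likely combine several such features (e.g., number of occurrences of the citation, its position in the paper, author overlap) in a supervised classifier, as in references [1] and [2].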

References

  1. Bai, Xiaomei, et al. "Identifying Anomalous Citations for Objective Evaluation of Scholarly Article Impact." PLoS ONE 11.9 (2016): e0162364.
  2. Valenzuela, Marco, Vu Ha, and Oren Etzioni. "Identifying meaningful citations." Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015.
  3. https://www.microsoft.com/cognitive-services/en-us/academic-knowledge-api
  4. http://dblp.uni-trier.de/faq/13501473
Attachment: extraction.zip (713k), Eric Medvet, Nov 4, 2018, 11:53 PM