PhD Student
Computer Science Institute
University of Bonn

Profiles: LinkedIn, Google Scholar, DBLP, ResearchGate, Twitter

Room A120
Römerstr. 164, 53117 Bonn
University of Bonn, Computer Science
esteves@cs.uni-bonn.de

Short CV


Diego Esteves is a PhD student and research associate at the University of Bonn. His research interests lie in the areas of Fact Checking and Information Retrieval.

Research Interests


  • Fact Checking
  • Natural Language Processing (NLP)
  • Machine Learning (ML) for NLP
  • Reproducible Research (RR)

Projects


  • DeFacto: Deep Fact Validation
  • MEX: Machine Learning Interchange Format
  • HORUS: Named Entity Recognition for Informal Text
  • Experimental Analysis of Classic CS Problems

Teaching


Publications


2017

  • D. Esteves, R. Peres, J. Lehmann, and G. Napolitano, “Named Entity Recognition in Twitter using Images and Text,” in 3rd International Workshop on Natural Language Processing for Informal Text (NLPIT 2017), 2017.
    [BibTeX] [Download PDF]
    @InProceedings{estevesNERshort2017,
    Title = {Named {E}ntity {R}ecognition in {T}witter using {I}mages and {T}ext},
    Author = {Diego Esteves and Rafael Peres and Jens Lehmann and Giulio Napolitano},
    Booktitle = {3rd International Workshop on Natural Language Processing for Informal Text (NLPIT 2017)},
    Year = {2017},
    Bdsk-url-1 = {https://www.researchgate.net/publication/317721565_Named_Entity_Recognition_in_Twitter_using_Images_and_Text},
    Keywords = {horus ner 2017 esteves napolitano lehmann sda},
    Url = {https://www.researchgate.net/publication/317721565_Named_Entity_Recognition_in_Twitter_using_Images_and_Text}
    }

  • C. B. Neto, D. Kontokostas, G. Publio, D. Esteves, A. Kirschenbaum, and S. Hellmann, “IDOL: Comprehensive & Complete LOD Insights,” in 13th International Conference on Semantic Systems (SEMANTiCS 2017), 11-14 September 2017, Amsterdam, Holland, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{ciroIDOL2017,
    Title = {{IDOL}: {C}omprehensive \& {C}omplete {LOD} {I}nsights},
    Author = {Ciro Baron Neto and Dimitris Kontokostas and Gustavo Publio and Diego Esteves and Amit Kirschenbaum and Sebastian Hellmann},
    Booktitle = {13th International Conference on Semantic Systems (SEMANTiCS 2017), 11-14 September 2017, Amsterdam, Holland},
    Year = {2017},
    Bdsk-url-1 = {https://svn.aksw.org/papers/2017/SEMANTiCS_IDOL/public.pdf},
    Keywords = {idol 2017 esteves baron sda},
    Url = {https://svn.aksw.org/papers/2017/SEMANTiCS_IDOL/public.pdf}
    }

  • T. Soru, E. Marx, D. Moussallem, G. Publio, A. Valdestilhas, D. Esteves, and C. B. Neto, “SPARQL as a Foreign Language,” arXiv preprint arXiv:1708.07624, 2017.
    [BibTeX] [Download PDF]
    @Article{soru2017sparql,
    Title = {{SPARQL} as a Foreign Language},
    Author = {Tommaso Soru and Edgard Marx and Diego Moussallem and Gustavo Publio and Andr{\'e} Valdestilhas and Diego Esteves and Ciro Baron Neto},
    Journal = {arXiv preprint arXiv:1708.07624},
    Year = {2017},
    Bdsk-url-1 = {https://arxiv.org/html/1708.07624},
    Keywords = {qa 2017 esteves sda},
    Url = {https://arxiv.org/html/1708.07624}
    }

  • D. Esteves, D. Moussallem, T. Soru, C. B. Neto, J. Lehmann, A. Ngonga Ngomo, and J. C. Duarte, “LOG4MEX: A Library to Export Machine Learning Experiments,” in Proceedings of the International Conference on Web Intelligence, New York, NY, USA, 2017, pp. 139-145. doi:10.1145/3106426.3106530
    [BibTeX] [Download PDF]
    @InProceedings{Esteves:2017:LLE:3106426.3106530,
    Title = {{LOG4MEX}: {A} {L}ibrary to {E}xport {M}achine {L}earning {E}xperiments},
    Author = {Diego Esteves and Diego Moussallem and Tommaso Soru and Ciro Baron Neto and Jens Lehmann and Axel-Cyrille {Ngonga Ngomo} and Julio Cesar Duarte},
    Booktitle = {Proceedings of the International Conference on Web Intelligence},
    Year = {2017},
    Address = {New York, NY, USA},
    Pages = {139--145},
    Publisher = {ACM},
    Series = {WI '17},
    Acmid = {3106530},
    Bdsk-url-1 = {http://doi.acm.org/10.1145/3106426.3106530},
    Doi = {10.1145/3106426.3106530},
    ISBN = {978-1-4503-4951-2},
    Keywords = {LOG4MEX, sda, esteves, lehmann, 2017, logging, machine learning experiments, metadata, ontology, provenance, software architecture},
    Location = {Leipzig, Germany},
    Numpages = {7},
    Url = {http://doi.acm.org/10.1145/3106426.3106530}
    }

  • J. C. Duarte, M. C. R. Cavalcanti, I. de Souza Costa, and D. Esteves, “An Interoperable Service for the Provenance of Machine Learning Experiments,” in Proceedings of the International Conference on Web Intelligence, New York, NY, USA, 2017, pp. 132-138. doi:10.1145/3106426.3106496
    [BibTeX] [Download PDF]
    @InProceedings{Duarte:2017:ISP:3106426.3106496,
    Title = {An {I}nteroperable {S}ervice for the {P}rovenance of {M}achine {L}earning {E}xperiments},
    Author = {Julio Cesar Duarte and Maria Claudia Reis Cavalcanti and Igor de Souza Costa and Diego Esteves},
    Booktitle = {Proceedings of the International Conference on Web Intelligence},
    Year = {2017},
    Address = {New York, NY, USA},
    Pages = {132--138},
    Publisher = {ACM},
    Series = {WI '17},
    Acmid = {3106496},
    Bdsk-url-1 = {http://doi.acm.org/10.1145/3106426.3106496},
    Doi = {10.1145/3106426.3106496},
    ISBN = {978-1-4503-4951-2},
    Keywords = {data provenance, interoperability, machine learning, mex, esteves, sda, 2017, reproducible research},
    Location = {Leipzig, Germany},
    Numpages = {7},
    Url = {http://doi.acm.org/10.1145/3106426.3106496}
    }

  • R. Peres, D. Esteves, and G. Maheshwari, “Bidirectional LSTM with a Context Input Window for Named Entity Recognition in Tweets,” in Proceedings of the 9th International Conference on Knowledge Capture, 2017.
    [BibTeX]
    @inproceedings{peres2017nerpt,
    title={Bidirectional {LSTM} with a {C}ontext {I}nput {W}indow for {N}amed {E}ntity {R}ecognition in {T}weets},
    author={Rafael Peres and Diego Esteves and Gaurav Maheshwari},
    booktitle={Proceedings of the 9th International Conference on Knowledge Capture},
    year={2017},
    Keywords = {sda esteves 2017},
    organization={ACM}
    }

  • S. A. Coelho, D. Moussallem, G. C. Publio, and D. Esteves, “TANKER: Distributed Architecture for Named Entity Recognition and Disambiguation,” arXiv preprint arXiv:1708.09230, 2017.
    [BibTeX] [Download PDF]
    @Article{coelho2017tanker,
    Title = {{TANKER}: Distributed Architecture for Named Entity Recognition and Disambiguation},
    Author = {Sandro A. Coelho and Diego Moussallem and Gustavo C. Publio and Diego Esteves},
    Journal = {arXiv preprint arXiv:1708.09230},
    Year = {2017},
    Bdsk-url-1 = {https://arxiv.org/abs/1708.09230},
    Keywords = {horus ner 2017 esteves sda},
    Url = {https://arxiv.org/abs/1708.09230}
    }

  • T. Soru, D. Esteves, E. Marx, and A. Ngonga Ngomo, “Mandolin: A Knowledge Discovery Framework for the Web of Data,” arXiv preprint arXiv:1711.01283, 2017.
    [BibTeX] [Download PDF]
    @article{soru2017mandolin,
    title={Mandolin: A {K}nowledge {D}iscovery {F}ramework for the {W}eb of {D}ata},
    author={Tommaso Soru and Diego Esteves and Edgard Marx and Axel-Cyrille {Ngonga Ngomo}},
    journal={arXiv preprint arXiv:1711.01283},
    year={2017},
    Keywords = {sda esteves 2017},
    Url= {https://arxiv.org/abs/1711.01283}
    }

2016

  • C. B. Neto, D. Esteves, T. Soru, D. Moussallem, A. Valdestilhas, and E. Marx, “WASOTA: What are the states of the art?,” in 12th International Conference on Semantic Systems (SEMANTiCS 2016), 12-15 September 2016, Leipzig, Germany (Posters & Demos), 2016.
    [BibTeX] [Abstract] [Download PDF]
    Presently, a growing number of publications in the Machine Learning and Data Mining fields are contributing to the improvement of algorithms and methods in their respective areas. However, with regard to publishing and sharing scientific experiment achievements, we still face problems in searching and ranking these methods. Scouring the Internet for state-of-the-art information about specific contexts, such as Named Entity Recognition (NER), is often a time-consuming task. Besides, this process can lead to an incomplete investigation, either because search engines may return incomplete information or because keywords may not be properly defined. To bridge this gap, we present WASOTA, a web portal specifically designed to share and readily present metadata about the state of the art in specific domains, making the process of searching for this information easier.

    @InProceedings{wasota2016,
    Title = {{WASOTA}: {W}hat are the states of the art?},
    Author = {Ciro Baron Neto and Diego Esteves and Tommaso Soru and Diego Moussallem and Andre Valdestilhas and Edgard Marx},
    Booktitle = {12th International Conference on Semantic Systems (SEMANTiCS 2016), 12-15 September 2016, Leipzig, Germany (Posters \& Demos)},
    Year = {2016},
    Abstract = {Presently, an amount of publications in Machine Learning and Data Mining contexts are contributing to the improvement of algorithms and methods in their respective fields. However, with regard to publication and sharing of scientific experiment achievements, we still face problems on searching and ranking these methods. Scouring the Internet to search state-of-the-art information about specific contexts, such as Named Entity Recognition (NER), is often a time-consuming task. Besides, this process can lead to an incomplete investigation, either because search engines may return incomplete information or keywords may not be properly defined. To bridge this gap, we present WASOTA, a web portal specifically designed to share and readily present metadata about the state of the art on a specific domains, making the process of searching this information easier.},
    Keywords = {2016 mex baron ciro esteves moussallem soru marx valdestilhas aksw mole simba group_aksw},
    Url = {http://wasota.aksw.org/#/home}
    }

  • A. Lawrynowicz, D. Esteves, P. Panov, T. Soru, S. Dzeroski, and J. Vanschoren, “An Algorithm, Implementation and Execution Ontology Design Pattern,” Studies on the Semantic Web, 2016.
    [BibTeX]
    @article{lawrynowicz2016algorithm,
    title={An {A}lgorithm, {I}mplementation and {E}xecution {O}ntology {D}esign {P}attern},
    author={Agnieszka Lawrynowicz and Diego Esteves and Pance Panov and Tommaso Soru and Saso Dzeroski and Joaquin Vanschoren},
    journal={Studies on the Semantic Web},
    year={2016},
    keywords = {esteves}
    }

  • D. Esteves, P. N. Mendes, D. Moussallem, J. C. Duarte, A. Zaveri, J. Lehmann, C. B. Neto, I. Costa, and M. C. Cavalcanti, “MEX Interfaces: Automating Machine Learning Metadata Generation,” in 12th International Conference on Semantic Systems (SEMANTiCS 2016), 12-15 September 2016, Leipzig, Germany, 2016.
    [BibTeX] [Abstract] [Download PDF]
    Despite recent efforts to achieve a high level of interoperability of Machine Learning (ML) experiments, positively contributing to the Reproducible Research context, we still run into problems created by the existence of different ML platforms: each of these has a specific conceptualization or schema for representing data and metadata. This scenario leads to extra coding effort to achieve both the desired interoperability and a better provenance level, as well as a more automated environment for obtaining the generated results. Hence, when using ML libraries, it is a common task to re-design specific data models (schemata) and develop wrappers to manage the produced outputs. In this article, we discuss this gap, focusing on a solution to the question: “What is the cleanest and lowest-impact solution for achieving both higher interoperability and provenance metadata levels in the Integrated Development Environment (IDE) context, and how can the inherent data-querying task be facilitated?” We introduce a novel, low-impact methodology specifically designed for code built in that context, combining semantic web concepts and reflection in order to minimize the gap for exporting ML metadata in a structured manner, allowing embedded code annotations that are, at run-time, converted into one of the state-of-the-art ML schemas for the Semantic Web: the MEX Vocabulary.

    @InProceedings{estevesMEX2016,
    Title = {{MEX} {I}nterfaces: {A}utomating {M}achine {L}earning {M}etadata {G}eneration},
    Author = {Diego Esteves and Pablo N. Mendes and Diego Moussallem and Julio Cesar Duarte and Amrapali Zaveri and Jens Lehmann and Ciro Baron Neto and Igor Costa and Maria Claudia Cavalcanti},
    Booktitle = {12th {I}nternational {C}onference on {S}emantic {S}ystems (SEMANTiCS 2016), 12-15 September 2016, Leipzig, Germany},
    Year = {2016},
    Abstract = {Despite recent efforts to achieve a high level of interoperability of Machine Learning (ML) experiments, positively collaborating with the Reproducible Research context, we still run into the problems created due to the existence of different ML platforms: each of those have a specific conceptualization or schema for representing data and metadata. This scenario leads to an extra coding-effort to achieve both the desired interoperability and a better provenance level as well as a more automatized environment for obtaining the generated results. Hence, when using ML libraries, it is a common task to re-design specific data models (schemata) and develop wrappers to manage the produced outputs. In this article, we discuss this gap focusing on the solution for the question: ``What is the cleanest and lowest-impact solution to achieve both higher interoperability and provenance metadata levels in the Integrated Development Environments (IDE) context and how to facilitate the inherent data querying task?''. We introduce a novel and low impact methodology specifically designed for code built in that context, combining semantic web concepts and reflection in order to minimize the gap for exporting ML metadata in a structured manner, allowing embedded code annotations that are, in run-time, converted in one of the state-of-the-art ML schemas for the Semantic Web: the MEX Vocabulary.},
    Bdsk-url-1 = {https://www.researchgate.net/publication/305143958_MEX_InterfacesAutomating_Machine_Learning_Metadata_Generation},
    Keywords = {mex 2016 sys:relevantFor:infai sys:relevantFor:bis hobbit projecthobbit esteves baron group_aksw lehmann sda mole moussallem MOLE},
    Url = {https://www.researchgate.net/publication/305143958_MEX_InterfacesAutomating_Machine_Learning_Metadata_Generation}
    }
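
    The abstract above describes exporting ML metadata through embedded code annotations that are resolved via reflection at run-time. As a rough illustration of that general idea, here is a minimal sketch in Python; the decorator and field names are hypothetical stand-ins, not the actual MEX Interfaces API:

    import functools
    import inspect
    import time

    def log_experiment(dataset=None, task=None):
        """Hypothetical annotation that captures experiment metadata."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.time()
                result = fn(*args, **kwargs)
                metadata = {
                    "implementation": fn.__name__,            # gathered via reflection
                    "signature": str(inspect.signature(fn)),  # over the annotated code
                    "dataset": dataset,
                    "task": task,
                    "duration_s": round(time.time() - start, 3),
                    "outcome": result,
                }
                print(metadata)  # a real exporter would serialize this into a vocabulary such as MEX
                return result
            return wrapper
        return decorator

    @log_experiment(dataset="iris", task="classification")
    def train(c=1.0):
        # stand-in for an actual training run; returns a metric
        return {"accuracy": 0.93}

    train(c=0.5)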

2015

  • R. Speck, D. Esteves, J. Lehmann, and A. Ngonga Ngomo, “DeFacto – A Multilingual Fact Validation Interface,” in 14th International Semantic Web Conference (ISWC 2015), 11-15 October 2015, Bethlehem, Pennsylvania, USA (Semantic Web Challenge Proceedings), 2015.
    [BibTeX] [Abstract] [Download PDF]
    The curation of a knowledge base is a key task for ensuring the correctness and traceability of the knowledge provided in the said knowledge base. This task is often carried out manually by human curators, who attempt to provide reliable facts and their respective sources in a three-step process: issuing appropriate keyword queries for the fact to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. However, this process is very time-consuming, mainly due to the human curators having to scrutinize the web pages retrieved by search engines. This demo paper demonstrates the RESTful implementation of DeFacto (Deep Fact Validation) – an approach able to validate facts in RDF by finding trustworthy sources for them on the Web. DeFacto aims to support the validation of facts by supplying the user with (1) relevant excerpts of web pages as well as (2) useful additional information, including (3) a score for the confidence DeFacto has in the correctness of the input fact. To achieve this goal, DeFacto collects and combines evidence from web pages written in several languages. We also provide an extension for finding similar resources obtained from Linked Data, using the sameas.org service as backend. In addition, DeFacto provides support for facts with a temporal scope, i.e., it can estimate the time frame within which a fact was valid.

    @InProceedings{defactorest,
    Title = {De{F}acto - {A} {M}ultilingual {F}act {V}alidation {I}nterface},
    Author = {Ren{\'e} Speck and Diego Esteves and Jens Lehmann and Axel-Cyrille {Ngonga Ngomo}},
    Booktitle = {14th International Semantic Web Conference (ISWC 2015), 11-15 October 2015, Bethlehem, Pennsylvania, USA (Semantic Web Challenge Proceedings)},
    Year = {2015},
    Editor = {Sean Bechhofer and Kostis Kyzirakos},
    Note = {Semantic Web Challenge, International Semantic Web Conference 2015},
    Abstract = {The curation of a knowledge base is a key task for ensuring the correctness and traceability of the knowledge provided in the said knowledge. This task is often carried out manually by human curators, who attempt to provide reliable facts and their respective sources in a three-step process: issuing appropriate keyword queries for the fact to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. However, this process is very time-consuming, mainly due to the human curators having to scrutinize the web pages retrieved by search engines. This demo paper demonstrate the RESTful implementation for DeFacto (Deep Fact Validation) - an approach able to validate facts in RDF by finding trustworthy sources for them on the Web. DeFacto aims to support the validation of facts by supplying the user with (1) relevant excerpts of web pages as well as (2) useful additional information including (3) a score for the confidence DeFacto has in the correctness of the input fact. To achieve this goal, DeFacto collects and combines evidence from web pages written in several languages. We also provide an extension for finding similar resources obtained from the Linked Data, using the sameas.org service as backend. In addition, DeFacto provides support for facts with a temporal scope, i.e., it can estimate the time frame within which a fact was valid.},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2015/swc_defacto.pdf},
    Keywords = {defacto ngonga esteves aksw 2015 lehmann speck rene},
    Url = {http://jens-lehmann.org/files/2015/swc_defacto.pdf}
    }

  • E. Marx, T. Soru, D. Esteves, A. Ngonga Ngomo, and J. Lehmann, “An Open Question Answering Framework,” in The 14th International Semantic Web Conference, Posters & Demonstrations Track, 2015.
    [BibTeX]
    @InProceedings{openqa2015,
    Title = {An {O}pen {Q}uestion {A}nswering {F}ramework},
    Author = {Edgard Marx and Tommaso Soru and Diego Esteves and Axel-Cyrille {Ngonga Ngomo} and Jens Lehmann},
    Booktitle = {The 14th International Semantic Web Conference, Posters \& Demonstrations Track},
    Year = {2015},
    Keywords = {SIMBA group_aksw marx ngonga smart lehmann openqa esteves mole soru 2015},
    Owner = {marx}
    }

  • D. Gerber, D. Esteves, J. Lehmann, L. Bühmann, R. Usbeck, A. Ngonga Ngomo, and R. Speck, “DeFacto – Temporal and Multilingual Deep Fact Validation,” Web Semantics: Science, Services and Agents on the World Wide Web, 2015.
    [BibTeX] [Abstract] [Download PDF]
    One of the main tasks when creating and maintaining knowledge bases is to validate facts and provide sources for them in order to ensure correctness and traceability of the provided knowledge. So far, this task is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming as the experts have to carry out several search processes and must often read several documents. In this article, we present DeFacto (Deep Fact Validation) – an algorithm able to validate facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of web pages as well as useful additional information including a score for the confidence DeFacto has in the correctness of the input fact. To achieve this goal, DeFacto collects and combines evidence from web pages written in several languages. In addition, DeFacto provides support for facts with a temporal scope, i.e., it can estimate in which time frame a fact was valid. Given that the automatic evaluation of facts has not been paid much attention to so far, generic benchmarks for evaluating these frameworks were not previously available. We thus also present a generic evaluation framework for fact checking and make it publicly available.

    @Article{gerber2015,
    Title = {De{F}acto - {T}emporal and {M}ultilingual {D}eep {F}act {V}alidation},
    Author = {Daniel Gerber and Diego Esteves and Jens Lehmann and Lorenz B{\"u}hmann and Ricardo Usbeck and Axel-Cyrille {Ngonga Ngomo} and Ren{\'e} Speck},
    Journal = {Web Semantics: Science, Services and Agents on the World Wide Web},
    Year = {2015},
    Abstract = {One of the main tasks when creating and maintaining knowledge bases is to validate facts and provide sources for them in order to ensure correctness and traceability of the provided knowledge. So far, this task is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming as the experts have to carry out several search processes and must often read several documents. In this article, we present DeFacto (Deep Fact Validation) - an algorithm able to validate facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of web pages as well as useful additional information including a score for the confidence DeFacto has in the correctness of the input fact. To achieve this goal, DeFacto collects and combines evidence from web pages written in several languages. In addition, DeFacto provides support for facts with a temporal scope, i.e., it can estimate in which time frame a fact was valid. Given that the automatic evaluation of facts has not been paid much attention to so far, generic benchmarks for evaluating these frameworks were not previously available. We thus also present a generic evaluation framework for fact checking and make it publicly available.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2015/JWS_DeFacto/public.pdf},
    Keywords = {2015 group_aksw simba diesel defacto lehmann esteves gerber usbeck speck ngonga geoknow buehmann},
    Url = {http://svn.aksw.org/papers/2015/JWS_DeFacto/public.pdf}
    }
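
    The pipeline sketched in the abstract above (verbalize the input triple into search queries, retrieve web pages, score the collected evidence, and combine the scores into a confidence value) can be outlined in a few lines. This is a toy sketch under stated assumptions: every function below is a placeholder, and the keyword-overlap scoring is a crude proxy rather than DeFacto's actual trained models.

    def verbalize(triple):
        """Turn an RDF-style triple into simple keyword queries (placeholder)."""
        s, p, o = triple
        return [f"{s} {p} {o}", f'"{s}" "{o}"']

    def retrieve(query):
        """Placeholder for a web search returning page texts."""
        return []  # plug a real search API in here

    def evidence_score(page_text, triple):
        """Crude proxy: does the page mention both subject and object?"""
        s, _, o = triple
        return float(s.lower() in page_text.lower() and o.lower() in page_text.lower())

    def confidence(triple):
        """Average the evidence scores over all pages retrieved for all queries."""
        scores = [evidence_score(page, triple)
                  for query in verbalize(triple)
                  for page in retrieve(query)]
        return sum(scores) / len(scores) if scores else 0.0

    print(confidence(("Albert Einstein", "awarded", "Nobel Prize in Physics")))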

  • D. Esteves, D. Moussallem, C. B. Neto, T. Soru, R. Usbeck, M. Ackermann, and J. Lehmann, “MEX Vocabulary: A Lightweight Interchange Format for Machine Learning Experiments,” in 11th International Conference on Semantic Systems (SEMANTiCS 2015), 15-17 September 2015, Vienna, Austria, 2015.
    [BibTeX] [Abstract] [Download PDF]
    Over the last decades, many machine learning experiments have been published, benefiting scientific progress. In order to compare machine-learning experiment results with each other and collaborate positively, they need to be performed thoroughly on the same computing environment, using the same sample datasets and algorithm configurations. Besides this, practical experience shows that scientists and engineers tend to produce large output data in their experiments, which is both difficult to analyze and to archive properly without provenance metadata. However, the Linked Data community still lacks a lightweight specification for interchanging machine-learning metadata across different architectures to achieve a higher level of interoperability. In this paper, we address this gap by presenting a novel vocabulary dubbed MEX. We show that MEX provides a prompt method to describe experiments, with a special focus on data provenance, and fulfills the requirements for long-term maintenance.

    @InProceedings{estevesMEX2015,
    Title = {{MEX} {V}ocabulary: {A} {L}ightweight {I}nterchange {F}ormat for {M}achine {L}earning {E}xperiments},
    Author = {Diego Esteves and Diego Moussallem and Ciro Baron Neto and Tommaso Soru and Ricardo Usbeck and Markus Ackermann and Jens Lehmann},
    Booktitle = {11th International Conference on Semantic Systems (SEMANTiCS 2015), 15-17 September 2015, Vienna, Austria},
    Year = {2015},
    Abstract = {Over the last decades many machine learning experiments have been published, giving benefit to the scientific progress. In order to compare machine-learning experiment results with each other and collaborate positively, they need to be performed thoroughly on the same computing environment, using the same sample datasets and algorithm configurations. Besides this, practical experience shows that scientists and engineers tend to have large output data in their experiments, which is both difficult to analyze and archive properly without provenance metadata. However, the Linked Data community still misses a light-weight specification for interchanging machine-learning metadata over different architectures to achieve a higher level of interoperability. In this paper, we address this gap by presenting a novel vocabulary dubbed MEX. We show that MEX provides a prompt method to describe experiments with a special focus on data provenance and fulfills the requirements for a long-term maintenance.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2015/SEMANTICS_MEX/public.pdf},
    Keywords = {mex simba 2015 sys:relevantFor:infai sys:relevantFor:bis aligned esteves baron usbeck group_aksw lehmann mole soru neto ackermann mack moussallem MOLE aligned-project},
    Url = {http://svn.aksw.org/papers/2015/SEMANTICS_MEX/public.pdf}
    }
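
    To make the idea of interchangeable experiment metadata concrete, here is a minimal sketch using rdflib in Python; the namespace and property names below are illustrative stand-ins, not the official MEX vocabulary terms:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    # Hypothetical namespace standing in for the real MEX vocabulary
    EX = Namespace("http://example.org/mex-like#")

    g = Graph()
    g.bind("ex", EX)

    run = EX["execution1"]
    g.add((run, RDF.type, EX.Execution))                           # one experiment run
    g.add((run, EX.algorithm, Literal("SVM")))                     # executed algorithm
    g.add((run, EX.dataset, Literal("iris")))                      # input dataset
    g.add((run, EX.accuracy, Literal(0.93, datatype=XSD.double)))  # measured performance

    print(g.serialize(format="turtle"))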

  • D. Esteves, D. Moussallem, C. B. Neto, J. Lehmann, M. C. Cavalcanti, and J. C. Duarte, “Interoperable Machine Learning Metadata using MEX,” in 14th International Semantic Web Conference (ISWC 2015), 11-15 October 2015, Bethlehem, Pennsylvania, USA (Posters & Demos), 2015.
    [BibTeX] [Download PDF]
    @InProceedings{estevesMNLCD15,
    Title = {Interoperable {M}achine {L}earning {M}etadata using {MEX}.},
    Author = {Diego Esteves and Diego Moussallem and Ciro Baron Neto and Jens Lehmann and Maria Claudia Cavalcanti and Julio Cesar Duarte},
    Booktitle = {14th International Semantic Web Conference (ISWC 2015), 11-15 October 2015, Bethlehem, Pennsylvania, USA (Posters \& Demos)},
    Year = {2015},
    Editor = {Serena Villata and Jeff Z. Pan and Mauro Dragoni},
    Publisher = {CEUR-WS.org},
    Series = {CEUR Workshop Proceedings},
    Volume = {1486},
    Bdsk-url-1 = {http://ceur-ws.org/Vol-1486/paper_102.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/291927b04e3cd969e894a6c93fd05af57/dblp},
    Crossref = {conf/semweb/2015p},
    Keywords = {mex esteves aksw dblp 2015 baron neto lehmann moussallem},
    Timestamp = {2015-12-24T12:18:02.000+0100},
    Url = {http://dblp.uni-trier.de/db/conf/semweb/iswc2015p.html#EstevesMNLCD15}
    }

2014

  • D. Esteves, “Predição de Tendência de Ativos em Séries Financeiras Utilizando Algoritmos de Aprendizado de Máquina” [Asset Trend Prediction in Financial Series Using Machine Learning Algorithms], Master’s thesis, Military Institute of Engineering – Brazilian Army, 2014.
    [BibTeX] [Download PDF]
    @MastersThesis{esteves2014,
    Title = {Predi\c{c}\~{a}o de Tend\^{e}ncia de Ativos em S\'{e}ries Financeiras Utilizando Algoritmos de Aprendizado de M\'{a}quina},
    Author = {Diego Esteves},
    School = {Military Institute of Engineering - Brazilian Army},
    Year = {2014},
    Month = {6},
    Bdsk-url-1 = {http://www.comp.ime.eb.br/pos/conteudo/publicacoes/detalhe-dissertacoes.html?q=2014&z=7},
    Keywords = {2014 esteves stock market assets time series sda},
    Url = {http://www.comp.ime.eb.br/pos/images/repositorio-dissertacoes/2014-Diego_Esteves.pdf}
    }

  • D. Esteves and J. C. Duarte, “Prediction of Assets Behavior in Financial Series using Machine Learning Algorithms,” International Journal of Advanced Research in Artificial Intelligence, vol. 2, iss. 11, 2014.
    [BibTeX]
    @article{esteves2013prediction,
    title={{P}rediction of {A}ssets {B}ehavior in {F}inancial {S}eries using {M}achine {L}earning {A}lgorithms},
    author={Diego Esteves and Julio Cesar Duarte},
    journal={International Journal of Advanced Research in Artificial Intelligence},
    Keywords = {2014 esteves stock market assets time series},
    volume={2},
    number={11},
    year={2014}
    }