Job description:
What The Role Is
Project title: Evaluation of AI Testing Tools
The successful candidate will assess and compare leading AI testing platforms, determining their effectiveness, efficiency, and suitability for various AI development and deployment scenarios.
What You Will Be Working On
* Identify and select AI testing tools and platforms for evaluation; examples include the Adversarial Robustness Toolbox (ART), PyRIT, and tools from NIST.
* Develop evaluation criteria covering functionality, usability, performance, AI support, reporting, and security.
* Apply a testing methodology of standardised tests, hands-on evaluation, and comparative analysis.
* Carry out use case analysis across AI applications and organisation types.
* Produce deliverables: a comparison report and recommendations (see the scoring sketch after this list).
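
As a purely illustrative sketch of how the comparative-analysis deliverable might be structured, the Python snippet below aggregates per-criterion scores for each tool into a weighted ranking. The tool names, weights, and scores are hypothetical placeholders, not evaluation results; the criteria mirror those listed above.

```python
# Hypothetical scoring harness for the comparison-report deliverable.
# All tool names, weights, and scores below are illustrative placeholders.

CRITERIA_WEIGHTS = {
    "functionality": 0.25,
    "usability": 0.15,
    "performance": 0.20,
    "ai_support": 0.20,
    "reporting": 0.10,
    "security": 0.10,
}

# Example per-criterion scores on a 1-5 scale (placeholder values only).
TOOL_SCORES = {
    "Adversarial Robustness Toolbox": {
        "functionality": 4, "usability": 3, "performance": 4,
        "ai_support": 5, "reporting": 3, "security": 4,
    },
    "PyRIT": {
        "functionality": 4, "usability": 4, "performance": 3,
        "ai_support": 4, "reporting": 4, "security": 4,
    },
}


def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-criterion scores into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[criterion] * value for criterion, value in scores.items())


def rank_tools(tool_scores: dict[str, dict[str, int]]) -> list[tuple[str, float]]:
    """Return tools sorted from highest to lowest weighted score."""
    ranked = [(tool, weighted_score(scores)) for tool, scores in tool_scores.items()]
    return sorted(ranked, key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    for tool, score in rank_tools(TOOL_SCORES):
        print(f"{tool}: {score:.2f}")
```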
What We Are Looking For
* Able to commit a minimum of 6 months
* Degree/Diploma in Computer Science/Business Analytics/Computer Engineering
* AI/ML expertise: Strong understanding of AI and machine learning concepts, models, and frameworks.
* Data analysis: Ability to analyse and interpret complex data sets
Source: Company website
Published: 24 Jan 2025 (checked 14 Mar 2025)
Offer type: Internship
Languages: English