Research
Software Systems
The University of New South Wales

SSRG Honours Thesis Projects

Introduction

The thesis topics listed here are available to strong undergraduate students. They are mostly associated with research projects and generally quite challenging; many topics have the potential to lead to a publication, and on average we get about one paper a year from the work of one (or more) undergraduate thesis students. Students who are not aiming for excellence are in the wrong place here.

Note that the list below is constantly updated; new topics are added as we identify them while work on various research projects proceeds. Topics marked NEW are recent additions.

UNSW students can access all of our recent student theses.

Undergraduate Thesis Topics in Software Engineering and Cloud Computing

Undergraduate Thesis Topics in Operating Systems and Formal Methods


Undergraduate Thesis Topics in Software Engineering and Cloud Computing

Software engineering/cloud projects are open to students from Usyd, UTS, UNSW and ANU. Please email the supervisors for details.

Topics supervised by Liming Zhu (UNSW official list)

  • Dependable Auditing on Operations of in-Cloud Applications NEW
    Despite the tremendous potential of cloud computing, organisations that deploy their applications in the cloud may have concerns about loss of control and dependability issues (e.g. security, privacy and availability). Operations on in-cloud applications are usually a mix of (semi-)automated tasks performed by both in-house administrators and cloud infrastructure providers reacting to a rapidly changing environment.
    It is often difficult to establish assurance cases and track accountability for these operations in three cases: 1) compliance testing with respect to legal requirements such as Basel III; 2) forensic operations after a security breach; and 3) when the applications encounter problems and it is necessary to seek compensation from the cloud vendor.
    This project will investigate both the technical mechanisms and the legal issues in auditing operations of in-cloud applications. Specifically, the student will examine the current logging facilities related to typical operations of in-cloud applications, and cloud infrastructure vendors' compliance processes and evidence requirements. The student will then propose improvements in logging and auditing so that better assurance and accountability can be established.
    References: email liming.zhu@nicta.com.au
    Supervisor: Len Bass (Len.Bass@nicta.com.au) & Liming Zhu (Liming.Zhu@nicta.com.au)
    Requirements: System administrators skills preferred. Business and legal background is a plus.
  • Availability Analysis for Applications in Public Cloud NEW
    The cloud is a disruptive technology and is quickly being adopted. Putting applications in the cloud will introduce uncertainties for operations that have traditionally been under the direct control of an enterprise. Enterprises will be dependent on cloud providers and will need to use indirect means to understand and guarantee their quality goals such as performance and availability.
    In this project, you will gain exposure to Amazon's public cloud services and work with system engineers from Yuruware on their real-world products and problems in achieving high availability, by analysing deployment architectures and performing measurements. This project is suitable for an individual or a group.
    References: http://www.ssrg.nicta.com.au/projects/cloud/managing-qos.pml, http://www.yuruware.com/
    Supervisor: Liming Zhu (Liming.Zhu@nicta.com.au)
    Requirements: Programming skills required. System administration skill is a plus.
  • Big Data Platform NEW
    Heavy use of computing produces a large amount of valuable data every day. There have been advances (e.g. MapReduce) at the infrastructure level for processing large amounts of data. However, challenges remain at the upper layer: connecting business data silos, accommodating entrenched but efficient domain-specific analysis tools, and making analysis results usable in real time by business operations, all in end-user-friendly ways.
    The aim of this project is to connect a set of existing open-source tools through programmable web interfaces to tackle the above challenges. NICTA's Software Systems Research Group (SSRG) has developed a web-based tool-mashup technology to make this task easier. Work in this project would include understanding big-data tools and NICTA's mashup tool, writing web-layer wrappers and programmable interfaces for existing tools, and building a demonstration system.
    The key novelty is the ability to advertise the capabilities of existing tools and connect them through the basic Web. The outcome would be a proof-of-concept demonstration system for web-coordinated analysis tools and data sources. This project is suitable for an individual or a group.
    References: Liming Zhu, Len Bass and Xiwei Xu, Data management requirements for a knowledge discovery platform, Architectures and Platforms for Knowledge Discovery from Data, Helsinki, Finland, pp. 4, August, 2012.
    Supervisor: Liming Zhu (Liming.Zhu@nicta.com.au)
    Requirements: Programming skills required.
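The "advertise capabilities and dispatch through the Web" idea above can be illustrated with a minimal sketch. All names here (the registry, `advertise`, `invoke`, the `wordcount` tool) are illustrative assumptions, not NICTA's actual mashup API:

```python
import json

# Hypothetical sketch: a registry that advertises the capabilities of
# wrapped analysis tools and dispatches web-style requests to them.
TOOLS = {}

def register(name, description, func):
    """Register a wrapped tool so its capability can be advertised."""
    TOOLS[name] = {"description": description, "func": func}

def advertise():
    """Return a JSON capability listing, as a web endpoint might serve it."""
    return json.dumps({name: t["description"] for name, t in TOOLS.items()})

def invoke(name, **params):
    """Dispatch a web-style request to a wrapped tool."""
    return TOOLS[name]["func"](**params)

# Example: wrap a trivial "word count" analysis tool.
register("wordcount", "count words in a text document",
         lambda text: len(text.split()))

print(advertise())
print(invoke("wordcount", text="big data made simple"))  # 4
```

A real system would put this registry behind HTTP endpoints; the sketch only shows the advertise/dispatch split.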
  • Understanding and Improving Operational Processes in Large-scale Distributed Systems NEW
    Most failures of modern large-scale distributed systems happen during sporadic operational processes (such as upgrade, backup, recovery and configuration changes).
    The overall goal of the project is to understand the characteristics of these complex operational processes so that we can improve the overall dependability of systems, especially during sporadic operational activities. These operational processes are always a combination of automated processes and human-intensive processes, which require different types of resources (software artifacts, computation power and humans) to carry out. These resources exhibit a wide range of characteristics in terms of their error-proneness, undoability, availability over time, dependencies, and so on.
    These characteristics pose significant challenges for choosing and scheduling resources appropriately, and affect overall system dependability. The project will expose students to a set of real-world operational processes for large-scale distributed systems. It involves modelling these processes in (semi-)formal process languages and investigating analysis methods (and tools) to determine the impact of an operational process on system dependability.
    References: email limingz@cse.unsw.edu.au
    Supervisor: Liming Zhu (Liming.Zhu@nicta.com.au)
    Requirements: Modelling experience and strong communication skills required.
  • Fault Tolerance Platforms for Large-Scale Distributed Applications NEW
    Fault tolerance is a major concern of modern large-scale distributed applications. However, many such applications run on top of commodity computing infrastructures (such as public clouds) that provide few fault-tolerance mechanisms and very limited control over the underlying infrastructure. This requires the applications to deal with faults and uncertainties themselves, which is difficult and cumbersome.
    The purpose of this project is to develop new fault-tolerance platforms on top of commodity computing infrastructure for large-scale distributed applications. One example effort in this direction is the set of open-source projects from Netflix (http://netflix.github.com/). In this project, students will be exposed to real-world cloud computing platforms and build new fault-tolerance platforms on top of them. You will be working with senior researchers and other students in a friendly environment.
    References: Netflix technical blog and email limingz@cse.unsw.edu.au
    Supervisor: Liming Zhu (Liming.Zhu@nicta.com.au)
    Requirements: Strong programming skills (e.g. Java) are required.
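One mechanism popularised by the Netflix open-source stack mentioned above is the circuit breaker: after repeated failures of a dependency, further calls fail fast instead of piling up. The sketch below is an illustrative, simplified version, not Netflix's actual implementation:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: fail fast while a dependency is broken."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures   # failures before opening
        self.reset_after = reset_after     # seconds before a probe is allowed
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        # While open, fail fast instead of hammering a broken dependency.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: allow one probe call
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0                  # success resets the failure count
        return result
```

A production platform would add per-dependency breakers, metrics and fallback logic; the sketch only shows the state machine.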
  • Application Design and Measurement in Cloud Platforms NEW
    Designing and deploying an application on cloud platforms (e.g. Amazon AWS, Rackspace...) is not easy, especially when dealing with failures and recovery. There are two aspects to the project. One is concerned with recovery-based design, which explores the different mechanisms and patterns for designing a recovery-friendly application from the very beginning. The second is concerned with the measurement of recovery time and state under different types of failures, configurations and designs.
    Students will be exposed to the latest cloud computing platforms and technologies, for example by deploying directly on the Amazon cloud platform.
    References: email limingz@cse.unsw.edu.au
    Supervisor: Liming Zhu (Liming.Zhu@nicta.com.au)
    Requirements: Strong system administration and operation skills are required.
  • Tablet-based Control Interface for Distributed Systems Deployment NEW
    Modern large-scale distributed systems require sophisticated deployment, and constant changes to that deployment in reaction to external workload, events and failures. This project is about developing a game-like control interface for system deployment. It works like the game Galcon (see references) on tablets.
    A set of "planets" represent the data centers and the "swarms of battle ships" represent the computing resources you need to relocate from one data center to another due to resource exhaustion on the source data center or other reasons.
    There might be underlying re-deployment policies and auto-migration tools to help, but the interface also allows system operators to redeploy using touch. This project will let you play with tablet applications and game design, and expose you to distributed-systems concepts.
    References: email limingz@cse.unsw.edu.au, http://www.youtube.com/watch?v=r-z-Pd9RcGM, https://play.google.com/store/apps/details?id=com.galcon.igalcon
    Supervisor: Liming Zhu (Liming.Zhu@nicta.com.au)
    Requirements: Interests in Tablet-based application/game designs.
  • Cloud-in-Cloud: Software Defined Data Center for the Next Generation of Cloud Computing NEW
    This project will explore the design of future "software defined data centers" inside public clouds. It will use lightweight virtualization - LXC (http://lxc.sourceforge.net/) - and network virtualization (i.e. software-defined virtual networks) to define new data centers optimized for agility and highly shared resource environments.
    Students will be exposed to real-world cloud computing platforms, and will set up and test new software-defined clouds on top of them. Students will be working with senior researchers and other students in a friendly environment.
    References: email limingz@cse.unsw.edu.au
    Supervisor: Liming Zhu (Liming.Zhu@nicta.com.au)
    Requirements: Strong system administration and operation skills are required.

Topics supervised by Ingo Weber (UNSW official list)

  • Topic Theme: Process Mining for Cloud Operation NEW
    With developments such as virtualization and cloud computing, system operation (such as installation, deployment, upgrading) has become a significantly more complex task: an operator might be responsible for thousands of machines, which are built and connected in ever more complex ways. Therefore it is important to support operators to make sure that, e.g., an upgrade process is executing correctly and has the desired result.
    Our work is thus concerned with (i) discovering how processes are executed from log files, and (ii) making sure a running process corresponds to the correct execution. Our initial work has been published - see below.
    In the context of this work, there are numerous open topics for future research:
    • Predicting success likelihood. From partial executions of a process, e.g. when done manually by an operator, we want to be able to tell the operator how likely he is to achieve his goal. This may help to prevent hours of unnecessary work.
    • Multi-instantiation of sub-processes. Certain cloud operation processes have sub-processes that are executed multiple times in parallel, where beforehand it is unknown how many such parallel executions will be required. Previous techniques for process mining do not deal with such cases.
    • Mining error handling procedures. Our current efforts in process mining focus on detecting the successful executions of a process. One missing aspect is error handling: if something goes wrong, and automatic error handling is invoked, when do we have to start worrying, and when is everything in order?
    • Automatic abstraction. One issue in discovering a process model from logs is the level of abstraction: usually, the logs are a lot more fine-grained and detailed than the level of abstraction we want in our process models. The abstraction is currently done in a largely manual fashion. When trying to abstract automatically, several hard research challenges arise: which events belong together? And when is a model appropriate, e.g., easy to understand for a human?
    Each of these topics can be addressed as an undergraduate / honours thesis, a research-linked project, or an internship. For a Master's or Ph.D. thesis, multiple of the above topics can be considered, or one of them can serve as a starting point from which the direction will be developed over the course of the degree programme.
    References: Detecting Cloud Provisioning Errors Using an Annotated Process Model
    Supervisors: Ingo Weber (Ingo.Weber@nicta.com.au) and Len Bass (Len.Bass@nicta.com.au)
    Requirements: Java or another equivalent language. The student should be able to rapidly understand and manipulate new technologies
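Aspect (ii) above, checking that a running process corresponds to a correct execution, can be sketched very simply if the discovered model is reduced to a set of allowed step transitions. The model and event names below are invented for illustration; real process-mining models (e.g. Petri nets) are far richer:

```python
# Discovered process model, reduced to allowed (previous, next) transitions.
MODEL = {
    ("start", "provision_vm"),
    ("provision_vm", "install_app"),
    ("install_app", "run_tests"),
    ("run_tests", "done"),
}

def conforms(trace):
    """Return the first non-conforming transition, or None if conformant."""
    for prev, step in zip(trace, trace[1:]):
        if (prev, step) not in MODEL:
            return (prev, step)
    return None

good = ["start", "provision_vm", "install_app", "run_tests", "done"]
bad  = ["start", "install_app"]          # skipped provisioning
print(conforms(good))  # None
print(conforms(bad))   # ('start', 'install_app')
```

Partial traces conform as long as every observed transition is allowed, which is the hook for the "predicting success likelihood" topic above.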
  • Topic Theme: Supporting Undo in API-controlled System NEW
    With the advent of cloud computing and related developments, more and more capabilities become available as APIs. For instance, instead of ordering a new server, waiting a few weeks, and installing the new server in one's network, nowadays a few API calls suffice to get hold of a new server in a public cloud. While such powerful APIs can provide enormous increases in productivity and time-to-solution, they open new possibilities for significant mishaps - e.g., if an administrator inadvertently deletes a virtual disk, all of the contained data is irrecoverably lost. In essence, many administrators operate without a safety net.
    In our work, we investigate the undoability of changes. On the one hand, we can check which operations can be undone, and under which circumstances. On the other hand, if undo is required, we can find a sequence of operations that brings a system back to a previously defined, desirable state: a checkpoint. Both techniques make use of Artificial Intelligence (AI) planning, and have been published - see below.
    In the context of this work, there are numerous open topics for future research:
    • Parallelization of undo plans. When the undo tool creates a sequence of operations to revert to a checkpoint, this plan should be examined on how it can be parallelized. Since the operations can fail, the plan can contain backup plans that specify what to do if an operation does not result in the desired outcome. Due to the presence of backup plans, parallelization becomes tricky.
    • Quality aspects of undo plans. Theoretically, infinitely many plans exist for many undo problems. The question is: which are the good plans? Quality aspects include overall duration, likelihood of success / robustness, and specific aspects like overall downtime of a specific service based on the resources manipulated by a plan. Depending on the user requirements, the planner should find a plan (or several meaningful alternatives) that suits the user's needs.
    • Automatic checkpointing. One problem of our undo approach is that it relies on the user setting a checkpoint before doing anything critical. If the user fails to do so, undo is not possible later on. The research question is: when should checkpoints be set, and how can we build / modify systems which automatically set checkpoints?
    • Scalability. Currently, our techniques scale to 50 or 100 resources, but not to thousands. Optimization and filtering strategies are required to focus the application of the techniques to only relevant aspects.
    • Extension into a general-purpose undo framework. At the moment, the undo work addresses specific resource types, mostly part of Amazon Web Services EC2. The existing work needs to be extended into a controller of a general-purpose undo framework, which can be easily extended through plugins. These plugins could provide integrated checkpointing / undo across various systems, e.g., internal configuration management of applications, snapshotting and restoring virtual hard drives, software-defined networks, etc.
    Each of these topics can be addressed as an undergraduate / honours thesis, a research-linked project, or an internship. For a Master's or Ph.D. thesis, multiple of the above topics can be considered, or one of them can serve as a starting point from which the direction will be developed over the course of the degree programme.
    References: Automatic Undo for Cloud Management via AI Planning and Supporting Undoability in Systems Operations
    Supervisors: Ingo Weber (Ingo.Weber@nicta.com.au), Len Bass (Len.Bass@nicta.com.au) and Alan Fekete (Alan.Fekete@nicta.com.au)
    Requirements: Java or another equivalent language. The student should be able to rapidly understand and manipulate new technologies
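The core idea, finding a sequence of operations that brings the system back to a checkpoint, can be sketched as a search over a state space. The real work uses AI planning; this illustrative toy replaces it with breadth-first search over one resource's lifecycle, and the operation names are invented:

```python
from collections import deque

# Operations: name -> (precondition state, resulting state)
OPS = {
    "start_instance": ("stopped", "running"),
    "stop_instance":  ("running", "stopped"),
    "terminate":      ("stopped", "terminated"),   # not undoable!
}

def undo_plan(current, checkpoint):
    """Find a shortest operation sequence from current back to checkpoint."""
    queue = deque([(current, [])])
    seen = {current}
    while queue:
        state, plan = queue.popleft()
        if state == checkpoint:
            return plan
        for op, (pre, post) in OPS.items():
            if pre == state and post not in seen:
                seen.add(post)
                queue.append((post, plan + [op]))
    return None   # nothing leads back, e.g. from "terminated"

print(undo_plan("running", "stopped"))     # ['stop_instance']
print(undo_plan("terminated", "running"))  # None: the delete was irreversible
```

The `None` case is exactly the undoability question studied in the referenced papers: some operations admit no reverting plan at all.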
  • Supporting Recovery During Deployment of New Application Versions in the Cloud NEW
    Deploying a new version of an application onto a cloud platform is a difficult undertaking: the application may be executing on thousands of servers, so the changeover between versions can't happen instantaneously, and the actions of the new version may conflict with actions of the old version. It is no surprise that errors occur during deployment and the old version must be restored.
    This project will design an approach that can automatically restore an application to the prior version. It will extend a recent result of Weber et al. from NICTA that uses AI planning technology to restore a previous platform configuration.
    Weber's method is independent of the application being deployed, while the proposed project will be targeted to the case where the application is aware that a new version is being deployed, and so it can set up the necessary environment, and save appropriate state, to support a rollback.
    References: Automatic Undo for Cloud Management via AI Planning and Supporting Undoability in Systems Operations
    Supervisors: Ingo Weber (Ingo.Weber@nicta.com.au), Len Bass (Len.Bass@nicta.com.au) and Alan Fekete (Alan.Fekete@nicta.com.au)
    Requirements: Java or another equivalent language. The student should be able to rapidly understand and manipulate new technologies

Topics supervised by Adnene Guabtni

  • Determining Configuration Errors on Deployment in the Cloud NEW
    Many recent software disasters have been caused by errors occurring during deployment of a new version of the software. The Knight Capital Trading Company, for example, suffered a $440 million loss as a result of a deployment error. One common source of deployment errors is the mis-setting of configuration parameters.
    A typical stack for an application in the cloud might be Joomla, PHP, MySQL, and the Apache server. Each of these elements may have 50 or more configuration parameters. An ongoing project at NICTA involves monitoring the activities of applications in the cloud. This project aims to extend that work to monitor the connection between configuration changes and system behaviour.
    To begin, the project will work with the existing monitoring software and add an element that a) recognizes modifications to a configuration file, b) monitors parameters of interest, e.g. CPU usage, network usage, transaction rate, and c) raises an alarm when these parameters deviate significantly after the configuration change compared with before it.
    As evaluation, we will then look for examples where the tool actually detects flaws. This would involve finding a change that was incorrect in some open-source Bugzilla repository, replicating the change, and showing that the tool actually detects the problem.
    References: An empirical study on configuration errors in commercial and open source systems
    Supervisors: Adnene Guabtni (Adnene.Guabtni@nicta.com.au), Len Bass (Len.Bass@nicta.com.au) and Alan Fekete (Alan.Fekete@nicta.com.au)
    Requirements: Java or another equivalent language. The student should be able to rapidly understand and manipulate new technologies
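Step c) above can be sketched with a simple statistical rule: raise an alarm when a monitored parameter's mean after the change lies far from its pre-change behaviour. The threshold-on-standard-deviations rule and the CPU numbers below are illustrative choices, not the project's actual detector:

```python
import statistics

def config_change_alarm(before, after, threshold=3.0):
    """Alarm if the post-change mean lies more than `threshold` standard
    deviations away from the pre-change mean."""
    mean = statistics.mean(before)
    stdev = statistics.stdev(before)
    if stdev == 0:
        return statistics.mean(after) != mean
    return abs(statistics.mean(after) - mean) / stdev > threshold

cpu_before = [40, 42, 41, 39, 43]   # % CPU prior to the config change
cpu_after  = [90, 93, 95, 91, 92]   # % CPU after it
print(config_change_alarm(cpu_before, cpu_after))  # True: raise an alarm
```

A real tool would also need to handle noisy, trending and multi-modal metrics, which simple thresholding does not.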

Topics supervised by Vincent Gramoli

  • Reconfiguration of Datacenters with SDN NEW
    Datacenter networks and services are at risk in the face of disasters. Existing fault-tolerant storage services cannot even achieve a nil recovery point objective (RPO), as client-generated data may get lost before its migration across geo-replicated data centers completes. Software-defined networking (SDN) has proved instrumental in exploiting application-level information to optimise the routing of information.
    The goal of this project is to leverage Software Defined Edge (SDE) or the implementation of SDN at the network edge to achieve nil RPO. The key idea is to duplicate and redirect the storage requests to multiple servers to guarantee fault tolerance. By implementing this duplication and redirection at the network level instead of the traditional application level we expect to improve the performance and overall fault-tolerance.
    The project requires skills in Python and Java as well as knowledge of distributed systems and network technologies.
    Research Environment: The successful applicant will work in a dynamic and highly innovative environment in the Software Systems Research Group at NICTA, in close collaboration with internationally recognised researchers. The applicant will benefit from strong collaborations with NICTA's industrial and academic partners. This project will be primarily based at the NICTA ATP laboratory, and the candidate is expected to interact with other members of the team, consisting of junior and senior research staff and PhD students. The institution will provide IT support and resources.
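The duplicate-and-redirect idea can be sketched at the application level (the project would push it down to the network edge). The replica "servers" below are just in-memory dicts, and all names are illustrative:

```python
class EdgeDuplicator:
    """Duplicate each storage write to several replicas so no
    client-generated data is lost if one data centre fails (nil RPO)."""

    def __init__(self, replicas):
        self.replicas = replicas           # list of replica stores

    def write(self, key, value):
        # Duplicate the request to every replica before acknowledging,
        # as an SDN edge element might redirect packets to multiple hosts.
        for replica in self.replicas:
            replica[key] = value
        return "ack"

    def read(self, key):
        # Any surviving replica can serve the read after a disaster.
        for replica in self.replicas:
            if key in replica:
                return replica[key]
        raise KeyError(key)

dc_sydney, dc_melbourne = {}, {}
edge = EdgeDuplicator([dc_sydney, dc_melbourne])
edge.write("order-17", "paid")
dc_sydney.clear()                          # simulate losing one data centre
print(edge.read("order-17"))               # 'paid': no data lost
```

Doing this duplication in the network rather than in the application is precisely what the project expects to make faster.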

Topics supervised by Sherry Xu

  • Automated Error Detection for Cloud Operations: Case Study of Migration/Deployment in Chef/Puppet NEW
    The cloud is a popular platform, but once a task exceeds a single machine, the consumer must also become an operator, performing configuration and similar tasks. Chef and Puppet are well-used frameworks/tools for executing such system operations.
    This project will develop tools, based on recent NICTA research, that allow automatic error detection at runtime during application migration between cloud providers.
    Deliverables: Chef scripts that do migration and detect errors as they occur. Extensions may respond effectively, diagnose root causes, etc.
    References: email xiwei.xu@nicta.com.au
    Supervisor: Xiwei (Sherry) Xu (xiwei.xu@nicta.com.au)
    Requirements: Good Java programming. Desirable is AWS experience or operational experience.
  • Collecting Cloud Environment Errors and Their Distribution: Case Study of AWS/VMware NEW
    Designing and deploying an application on cloud platforms (e.g. Amazon AWS, Rackspace...) is not easy, especially when dealing with failures and recovery, due to the uncertainty of cloud infrastructure. Understanding and modelling uncertain cloud infrastructure relies heavily on investigating the documentation provided by the cloud provider and on experimentation.
    This project will provide students an opportunity to learn popular cloud platforms and technologies and to work with system engineers from Yuruware.
    Deliverables: Automatic data collection of API calls and Errors
    References: email xiwei.xu@nicta.com.au
    Supervisor: Xiwei (Sherry) Xu (xiwei.xu@nicta.com.au)
    Requirements: Good Java programming.
  • Automatically Tracking API Error Codes NEW
    Deploying applications in cloud environments introduces uncertainties for operations that have traditionally been under the direct control of an enterprise. Understanding and modelling uncertain cloud infrastructure relies heavily on investigating the documentation provided by the cloud provider and on experimentation. Cloud APIs are updated every once in a while.
    In this project, the students will learn popular cloud platforms (e.g. Amazon AWS, Rackspace) and technologies, get their hands dirty, and build a tool to automatically track changes to an API's error messages/codes and error handling.
    Deliverables: A tool that automatically tracks the changing Cloud API
    References: email xiwei.xu@nicta.com.au
    Supervisor: Xiwei (Sherry) Xu (xiwei.xu@nicta.com.au) and Len Bass (Len.Bass@nicta.com.au)
    Requirements: Good Java programming.
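The core of such a tracker is diffing the error codes extracted from two versions of a provider's documentation. The sketch below shows only that diff step; the error codes are invented examples, not an actual AWS change log:

```python
def diff_error_codes(old, new):
    """Return (added, removed) error codes between two API versions."""
    return sorted(set(new) - set(old)), sorted(set(old) - set(new))

# Illustrative code sets scraped from two documentation snapshots.
v1 = {"InstanceLimitExceeded", "InvalidAMIID", "InternalError"}
v2 = {"InstanceLimitExceeded", "InvalidAMIID.NotFound", "InternalError"}

added, removed = diff_error_codes(v1, v2)
print(added)    # ['InvalidAMIID.NotFound']
print(removed)  # ['InvalidAMIID']
```

The harder part of the project is the scraping and scheduling around this diff, since documentation pages are unstructured and change without notice.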
  • Automatically Mining Error-Handling Information from Technical Forums through Natural Language Processing NEW
    Deploying applications in cloud environments introduces uncertainties for operations that have traditionally been under the direct control of an enterprise. Understanding and modelling uncertain cloud infrastructure relies heavily on investigating the documentation provided by the cloud provider, browsing technical forums, and experimentation.
    In this project, the students will learn popular cloud platforms (e.g. Amazon AWS, Rackspace) and technologies, get their hands dirty, and build a tool to automatically mine error-handling-relevant information from a technical forum using natural language processing techniques.
    Deliverables: A tool that automatically mines error-handling-relevant information from technical forums through natural language processing
    References: email xiwei.xu@nicta.com.au
    Supervisor: Xiwei (Sherry) Xu (xiwei.xu@nicta.com.au) and Len Bass (Len.Bass@nicta.com.au)
    Requirements: Good Java programming.
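A first baseline for the mining step is a keyword heuristic that flags forum posts discussing error handling; a serious tool would replace this with proper NLP (e.g. a trained classifier). The posts below are invented examples:

```python
import re

# Simple lexical heuristic for error-handling discussion.
ERROR_TERMS = re.compile(
    r"\b(error|exception|fail(?:ed|ure)?|timeout|retry)\b", re.IGNORECASE)

def error_handling_posts(posts):
    """Return the posts that appear to describe error handling."""
    return [p for p in posts if ERROR_TERMS.search(p)]

posts = [
    "My instance launch failed with InsufficientInstanceCapacity",
    "How do I tag my instances?",
    "Retry with exponential backoff fixed the RequestLimitExceeded error",
]
for p in error_handling_posts(posts):
    print(p)   # prints the first and third posts
```

Measuring how far such a baseline falls short of an NLP approach would itself be a useful early result for the thesis.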

Undergraduate Thesis Topics in Operating Systems and Formal Methods

We are generally looking for honours candidates, or students with outstanding performance in operating systems. Specifically we guarantee a thesis topic to any student who has obtained a HD grade in UNSW's Operating Systems or Advanced Operating Systems course, no matter what their other grades are!


Present topics supervised by Gernot Heiser (official list)

  • 3628: Message-passing vs migrating threads 

    Message-passing and migrating threads are two basic ways of implementing cross-domain communication (IPC). seL4, like all previous L4 kernels, uses the former. The kernel of the Composite OS, designed for similar application domains, uses the latter.

    This thesis is to examine and evaluate the Composite model and compare it to seL4, with the aim of understanding the main trade-offs and performance limitations, as well as the implications for resource management.

    Novelty and Contribution: evaluation of microkernel communication models in the context of a real-time capable system.
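The contrast between the two communication models can be reduced to a toy sketch in ordinary threads (the kernels obviously implement this very differently). Message passing: the client hands a request to a dedicated server thread and blocks for the reply. Migrating threads: the client's own thread executes the server's code directly, as if migrating into its protection domain. All names below are illustrative:

```python
import threading
import queue

def service(x):
    return x * 2        # the server's operation

# --- message passing: a server thread consumes requests from a mailbox ---
requests = queue.Queue()

def server_loop():
    while True:
        x, reply_box = requests.get()
        reply_box.put(service(x))

def ipc_call(x):
    reply_box = queue.Queue()
    requests.put((x, reply_box))    # send the request
    return reply_box.get()          # block for the reply

threading.Thread(target=server_loop, daemon=True).start()
print(ipc_call(21))                 # 42, computed by the server thread

# --- migrating threads: no handoff, the caller runs the server's code ---
print(service(21))                  # 42, computed in the caller's thread
```

The sketch makes the scheduling difference visible: message passing needs a thread switch per call, while a migrating thread pays no switch but complicates accounting of the caller's time and resources, which is one of the trade-offs this thesis would examine.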

  • 3627: Secure network OS for SDN 

    Present "network OSes" for SDN controllers are applications that run on top of a standard OS, such as Linux. They also provide little or no security against buggy or malicious control apps. 

    The recent Rosemary work proposes a micro-kernelish NOS that runs on top of Linux. A sensible approach would be a minimal seL4-based system that runs natively, and uses seL4's protection and IPC mechanisms. This should be able to achieve better protection as well as better performance than Rosemary.

    This project is to design such a system, implement a prototype and evaluate it. Results are likely publishable.

  • 3587: Interrupt-Related Covert Channels on seL4 

    seL4 is the world's only general-purpose kernel with a proof of confidentiality that applies to its binary implementation. This proof provides very strong guarantees that the kernel enforces data confidentiality --- i.e. prevents information leakage that would violate the system's access control policy --- with some side conditions. The side conditions include interrupts being disabled and that the proof does not cover covert timing channels.

    One such timing channel arises when interrupts are enabled: the arrival of interrupts causes the time-slice of the current thread to be extended, because interrupt-servicing pre-empts the currently running thread, allowing it to indirectly observe interrupts that should otherwise remain secret. The goal of this project is to investigate mechanisms to mitigate and close such channels, while ensuring adequate interrupt response latencies.

    We expect a natural trade-off between channel bandwidth and interrupt-response latency, so further work in this project would investigate this trade-off by applying existing tools to measure the effectiveness of various mitigation strategies against benchmarked latencies. This project therefore involves a combination of kernel implementation, benchmarking and analysis.

    Novelty: If successful, the results of this project could be incorporated into future versions of seL4, and be applied to relax the assumptions of the seL4 confidentiality proof, increasing seL4's applicability for applications that demand both high performance and high assurance. seL4 would become the world's only kernel with a code-level confidentiality proof that holds when interrupts are enabled.

    Outcome: The design, implementation and empirical evaluation of kernel mechanisms to mitigate interrupt-related covert channels for seL4.

    References: Existing work on measuring timing channels;  seL4's confidentiality proof

  • 3586: Sloth vs eChronos 

    eChronos is an RTOS designed for deeply-embedded systems with no memory protection and single-mode execution, which is being developed and formally verified by NICTA. Sloth is a system for a similar application domain, which takes the unusual approach of leaving all scheduling to hardware, by running everything in an interrupt context. This limits the use of Sloth to processors where interrupt mode can be entered from software. This project is to evaluate and quantify the performance advantage of Sloth over eChronos.

    Novelty: Sloth is presently the world's fastest RTOS. eChronos, which has the advantage of formal verification and less dependence on hardware features, is a more traditionally-designed RTOS. This project will determine whether the performance advantage of Sloth is significant enough to justify the different (and more limiting) design.

    Outcome: A better understanding of RTOS design tradeoffs, eminently publishable results.

  • 3584: Protected-Mode eChronos NEW

    eChronos is an RTOS designed for deeply-embedded systems with no memory protection and single-mode execution, which is being developed and formally verified by NICTA. However, there are interesting use cases for a verified kernel on mid-range processors that feature a simple memory-protection unit (MPU). A particularly interesting case is the ARM Cortex-M4, which eChronos already supports, albeit without utilising the MPU. This project is to design a protected-mode version of eChronos, then implement and evaluate it.

    Novelty: NICTA has produced the first verified kernels for high-end microprocessors with full virtual memory (seL4) as well as for low-end single-mode microcontrollers (eChronos). The remaining middle ground are MPU-only processors. Success of this project will complete coverage.

    Outcome: eChronos version that uses memory protection.

  • 3582: Effective Cross-Kernel Communication 

    For reasons of scalability and verifiability, seL4 uses a multikernel approach where cores do not share an L2 cache. This implies that kernels on different cores do not share state, and communicate asynchronously via mailboxes.

    This project is to design, implement and evaluate a user-level communication package for threads running on different cores on top of the kernel's minimal mechanisms, and compare it to other approaches, e.g. Linux IPC. This will, no doubt, require work on the seL4 mechanisms too. In fact, the project could be split between two students, one working inside the kernel and one at user level.
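    As a rough illustration of the core data structure (not seL4's actual mailbox API; all names here are made up), a single-producer/single-consumer ring buffer lets two cores exchange messages without a shared lock, because each index is only ever written by one side:

```python
# Toy single-producer/single-consumer mailbox of the kind a multikernel
# uses for cross-core messages: the sender writes only `head`, the
# receiver writes only `tail`, so the cores never contend on a lock.
class Mailbox:
    def __init__(self, slots=8):
        self.buf = [None] * slots
        self.head = 0   # next free slot; written only by the sender
        self.tail = 0   # next message; written only by the receiver

    def send(self, msg):
        nxt = (self.head + 1) % len(self.buf)
        if nxt == self.tail:
            return False              # mailbox full; sender retries later
        self.buf[self.head] = msg     # fill the slot first ...
        self.head = nxt               # ... then publish it
        return True

    def recv(self):
        if self.tail == self.head:
            return None               # empty; nothing to deliver
        msg = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        return msg
```

    On real hardware the "publish" step needs a memory barrier, and the receiver would typically be woken by an inter-processor interrupt rather than polling; the sketch only shows the ownership discipline that makes cross-core mailboxes cheap.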

    NICTA's Trustworthy Systems team are world leaders in research for providing unprecedented security, safety, reliability and efficiency for software systems. Successes include deployment of the OKL4 microkernel in billions of devices, and the first formally verified OS kernel, seL4. Present activities include covert channel mitigation, mixed-criticality real-time systems, and automatic code-proof co-generation. We are building a complete seL4-based high-assurance system for autonomous helicopters, like Boeing's Unmanned Little Bird, in a project funded by US DoD. You will work with a unique combination of OS and formal methods experts, producing high-impact work with real-world applicability, driven by ambition and team spirit.

    Novelty: Multikernels are new; other than the Barrelfish paper there is little evaluation, and the little that exists is on x86, with vastly different tradeoffs to our ARM platforms. Furthermore, seL4's idiosyncrasies mean that previous results are not necessarily transferable. Given the significance of seL4, this work can lead to publishable results.

    Outcome: Understanding of how to do user-level communication in an seL4 multikernel; report describing design, implementation and evaluation.

    References: Baumann et al., The Multikernel: A New OS Architecture for Scalable Multicore Systems, SOSP'09.

  • 3210: Making the TPM useful

    The Trusted Platform Module (TPM) specified by the Trusted Computing Group (TCG) and implemented on many PC platforms supports secure boot and remote attestation (where an external agent can ascertain that the system is in a particular software configuration). However, the TCG approach has been considered a failure for end-user devices, as it does nothing to ensure that the “trusted” software is trustworthy, and does not support upgrading it when it has been found to be vulnerable.

    The formally-verified seL4 microkernel presents an opportunity to make TPMs useful: seL4 is truly trustworthy, so attesting that it is running provides real assurance of trustworthiness. seL4 itself can then be used to instantiate a trusted software stack, and protect it from untrusted components, and it can be used to upgrade the trusted software securely. The Ironclad approach uses a similar idea, but requires verification of the full system, ruling out use of any untrusted components. Instead, we use seL4's isolation properties for protecting critical components from untrusted ones.

    This thesis is to build a demonstrator of an seL4-based trustworthy system. This will require implementing TPM-facilitated secure boot of seL4 and some trusted base which can be remotely attested. If time allows, demonstrate secure software evolution.
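    The attestation chain hinges on the TPM's extend operation: each boot stage measures the next component before handing control to it, folding the measurement into a platform configuration register (PCR), so the final PCR value commits to the exact boot sequence. A sketch of the idea (using SHA-256 as in TPM 2.0 PCR banks; the stage names are purely illustrative):

```python
import hashlib

def extend(pcr, measurement):
    # TPM-style extend: the new PCR value hashes the old value together
    # with the digest of the measurement, so the order of measurements
    # matters and no earlier stage can be retroactively hidden.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Measured boot: each stage measures the next component before running it.
pcr = b"\x00" * 32
for stage in [b"bootloader", b"seL4-kernel", b"trusted-root-task"]:
    pcr = extend(pcr, stage)
# A remote verifier that knows the expected digests recomputes this chain
# and compares it against the PCR value the TPM signs during attestation.
```

    Because the chain is order-sensitive, swapping or replacing any stage yields a different final PCR, which is what lets the verifier detect a tampered boot sequence.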

    Novelty and Contribution: Such an approach to a practical TPM-based trusted system has not been demonstrated, and will constitute publishable research.

    References: Chris Hawblitzel, Jon Howell, Jacob R. Lorch, Arjun Narayan, Bryan Parno, Danfeng Zhang, Brian Zill: Ironclad Apps: End-to-End Security via Automated Full-System Verification, OSDI'14.

I will not take on students who have not shown a convincing performance in COMP3231 "Operating Systems". I normally expect students to have done COMP9242 "Advanced Operating Systems", although I make exceptions in special cases.

Most topics can lead to publications.

Present topics supervised by Ihor Kuz (official list)

Topics

  • 3287: Secure terminal on seL4 NEW
    seL4 is a formally verified microkernel for building secure systems. A key element of such systems is secure access to terminal I/O (i.e. the screen, keyboard, and mouse), which means that different applications can get user input and output without worrying that other malicious applications (such as a key logger) can interfere. Nitpicker is a secure display architecture developed at Technical University of Dresden. In this project you will implement a version of Nitpicker for seL4, and use it as the basis for building a secure windowed terminal. Evaluate the resulting system by analysing its functionality, performance, and security.
  • 3288: seL4 AUTOSAR NEW
    seL4 has been developed to be the basis for building secure systems; however, it can also be used as the basis for safety-critical systems, such as those used in cars. With seL4 in such systems, it becomes possible to provide guarantees about memory isolation properties, which is crucial for safety-critical systems. Besides memory isolation, seL4 also has known timing properties, making it possible to give timing guarantees, which is important for real-time systems such as those found in cars. The goal of this project is to investigate the role that seL4 can play in such systems by implementing the AUTOSAR automotive framework to use seL4 as the underlying OS.
  • 3289: Qubes on seL4
    Qubes is a new operating system architecture for developing secure desktop systems. It is based on isolation, running each application in a separate virtual machine so that they cannot maliciously interfere with each other. However, Qubes is based on Xen, which is a relatively heavyweight, and insecure, hypervisor. Qubes would be much better if it ran on, and relied on, seL4 for its isolation. In this project you will implement a version of Qubes on seL4, and evaluate it by running various applications to analyse the security benefits that seL4 provides.
  • 1268: Shared resources in a microkernel-based OS
    One of the key services that an OS provides is managing access to shared resources. For example, a file system manages access to shared disk space, a network stack manages access to a network device, a window system manages access to the display, etc. In a modular, microkernel-based OS, these shared resources are managed by user-level services. In this project you will investigate ways of modelling such shared resource managers within the CAmkES component framework on seL4 and develop a suitable model for building such services in a componentised environment. You will assess the suitability of this model by designing, implementing, and evaluating one or more such services (e.g., a file system, a network stack, etc.).
  • IK10: Click Modular Router on L4
    TAKEN

Related topics supervised by Gerwin Klein (official list)

Projects

  • GWK01: Formal Model of an ARM Processor in Isabelle/HOL
    Develop a specification of an ARM processor (e.g. Xscale) suitable for use in formal verification of programs. A similar such model for an MMU-less ARM6 core has been developed by Anthony Fox at Cambridge in the HOL4 system. This should be examined for its usability, and for what is missing with respect to a full model of an Xscale processor. If time allows, an instruction-set level simulator should be generated from the model. This project is an integral part of the formal verification of the L4 micro kernel at NICTA. It connects cutting edge OS research with real-world large-scale system verification. You will work with the developers of L4 and Isabelle in an international team of PhD students and researchers in NICTA's SSRG group.
  • GWK02: Verifying the core of the standard C library in Isabelle/HOL
    You will work with a state-of-the-art interactive theorem prover (Isabelle/HOL) to formally verify the functional behaviour of a small number of basic C functions like memcpy, memset, etc. The verification of these functions is at the basis of any undertaking that wants to provide guarantees about programs implemented in C. This project is an integral and important part of the formal verification of the L4 microkernel at NICTA. You will work with the developers of L4 and Isabelle in an international team of PhD students and researchers in NICTA's SSRG group.
  • GWK03: Formal Model of L4 IPC and/or Threads in Isabelle/HOL
    Develop a specification of a subsystem of the L4 microkernel in the theorem prover Isabelle/HOL. L4 provides three basic abstractions - address spaces, threads and IPC. An abstract model has been developed for address spaces and the virtual memory subsystem, the aim of this project is to provide a similar model for one or both of the remaining abstractions. In addition, an investigation into high-level properties of this model will be undertaken, together with the development of proofs that the models satisfy these properties. If time allows, the model will be refined towards the L4Ka::Pistachio implementation on ARM. This project is an integral part of the formal verification of the L4 micro kernel at NICTA. It connects cutting edge OS research with real-world large-scale system verification. You will work with the developers of L4 and Isabelle in an international team of PhD students and researchers in NICTA's SSRG group.

Related topics supervised by Kevin Elphinstone (official list)

Projects

  • 2981: Secure microkernel-based web server using Linux instances
    Our research group has developed a formally verified secure microkernel that supports virtualisation. We have a version of Linux that runs on top of this kernel. The goal of this project is to develop a secure web server platform consisting of an instance of Linux running in the DMZ and an instance of Linux running on the trusted network - all actually running on the same machine using the secure microkernel to separate them. This project has the chance to be deployed as a demonstrator for our group's web site.
  • KJE15: A Secure Bootstrapper for seL4
    The seL4 microkernel is a high-assurance microkernel capable of acting as a separation kernel when it and the encompassing system are instantiated correctly. The goal of this thesis is to develop a simple component model that can specify an initial system state, i.e. the servers and applications that will run on the microkernel. The component model is then used to generate the bootstrapping code to instantiate the system with the specified separation guarantees. The project may involve evaluating the existing CAmkES framework for the component model, and looking at formal models and guarantees for both the component model and the generation of the bootstrapper.
  • KJE16: Linux as a component.
    NICTA has various versions of Linux that run para-virtualised on various versions of microkernels developed here at NICTA. However, the connection between Linux and the platform is rather ad hoc, which makes it difficult to bring Linux into the principled component framework (CAmkES) developed here at NICTA. This project would involve examining the interface between the microkernel and the support infrastructure to allow Linux to be just another component in the CAmkES framework.
  • KJE17: ARTEMIS robotic clarinet player
    NICTA is entering the ARTEMIS instrument-playing robot competition. This project involves developing the system-software side of the robot, with an eye to making it general enough for future entries. It involves low-level embedded-controller programming, Linux kernel programming, and application programming. A familiarity with music is also helpful.

Present topics supervised by Leonid Ryzhyk (official list)

  • 3221: Design and implementation of an algorithm for automatic device driver synthesis NEW
    Device-driver development is a notoriously difficult and error-prone task. An alternative approach to manually writing device drivers is to automatically synthesize them from a formal specification of the device and a specification of the interface between the driver and the OS. In this thesis project you will design, implement, and evaluate an algorithm for automatic driver synthesis. The main challenge involved in this project is dealing with state explosion that occurs when analysing realistic device specifications. In order to overcome this problem you will explore techniques such as compositional synthesis and abstraction refinement. This work will be carried out in close collaboration with other NICTA students and researchers working on driver synthesis.
  • 3222: Modelling of I/O devices for automatic device-driver synthesis
    Device-driver development is a notoriously difficult and error-prone task. An alternative approach to manually writing device drivers is to automatically synthesize them from a formal specification of the device and a specification of the interface between the driver and the OS. In this project you will develop specifications of several I/O devices for use in driver synthesis. Such a specification constitutes a model of device operation written in a high-level hardware description language (HDL) such as SystemVerilog or DML. You will then use these specifications to synthesise working drivers for the selected devices. In the course of this work you will identify limitations in the synthesis tool and will work with other students and researchers on improving the tool and the underlying algorithms.
  • 3071: Reliable Device Driver Framework for Linux
    As part of an effort to put an end to the numerous software failures caused by buggy device drivers, our research group is developing a new device driver architecture for Linux. This architecture eliminates certain types of bugs by design and makes writing correct drivers easier. In addition it facilitates automatic detection of driver bugs by model checking tools. In this project, you will develop Linux kernel components as part of our driver development framework and will implement one or more drivers using this framework. You will also come up with a formal specification of the interface between the driver and the OS and will use a model checker to verify that your drivers comply with this protocol. The outcome of this work will be published in one of the top OS conferences and will be proposed for inclusion in the Linux kernel.
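The synthesis topics above (3221/3222) cast the driver as a player in a game against the device: synthesis succeeds from exactly those states where the driver can always force the device into a goal state. A toy, explicit-state sketch of that winning-region fixpoint (real synthesis tools represent these sets symbolically, precisely because this explicit version is what blows up on realistic device specifications; all names are illustrative):

```python
def attractor(ctrl, env, edges, goal):
    # States from which the controller (the driver) can force the play
    # into `goal`. `ctrl` and `env` are disjoint sets of states owned by
    # the controller and the environment (the device); `edges` maps a
    # state to its list of successor states.
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for s in ctrl | env:
            if s in win:
                continue
            succs = edges.get(s, [])
            if s in ctrl:
                ok = any(t in win for t in succs)   # some driver move wins
            else:
                # every device move must stay winning (and a move must exist)
                ok = bool(succs) and all(t in win for t in succs)
            if ok:
                win.add(s)
                changed = True
    return win
```

For example, with a controller state "a" that can move to "b" or "x", and a device state "b" whose only move is to "goal", the fixpoint puts "a" and "b", but not the dead-end state "x", into the winning region.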

How to apply:

Contact the relevant supervisor.

Note for OS/FM related topics: We promise a thesis topic to every interested student who has obtained a HD grade in COMP3231/COMP9201 Operating Systems or COMP9242 Advanced Operating Systems. If necessary we will define additional topics to match demand.

We will not turn down any students doing exceptionally well in OS courses. However, this does not mean that an HD in OS or Advanced OS is a prerequisite for doing a thesis with me. Interested students with lower OS marks are welcome to talk to me if they feel they can convince me that they will be able to perform well in an OS thesis.

Keep in mind that these topics are all research issues and generally at the level of Honours Theses. They are not suitable for marginal students or students with a weak understanding of operating systems. We expect you to know your OS before you start.


Past thesis reports and DiSy thesis rules (internal access only)


Postgraduate thesis topics:

Undergraduate thesis topics are also suitable for coursework Master's projects. Same conditions apply: You must have a pretty good track record in OS courses for OS and FM related topics.

Information about research theses