SERL - Software Engineering Research Laboratory
The Software Engineering Research Lab (SERL) at AUT University undertakes world-class research directed at understanding and improving the practice of software professionals in their creation and preservation of software systems. We are interested in all models of software provision – bespoke development, package and component customisation, free/libre open source software (FLOSS) development, and delivery of software as a service (SaaS). The research we carry out may relate to just one or all of these models.
Browsing SERL - Software Engineering Research Laboratory by Author "Connor, AM"
Now showing 1 - 20 of 36
- A comparison of semi-deterministic and stochastic search techniques (Springer, 2000). Connor, AM; Shea, K.
  This paper presents an investigation of two search techniques, tabu search (TS) and simulated annealing (SA), to assess their relative merits when applied to engineering design optimisation. Design optimisation problems are generally characterised as having multi-modal search spaces and discontinuities, making global optimisation techniques beneficial. Both techniques claim to be capable of locating globally optimum solutions on a range of problems, but this capability is derived from different underlying philosophies: while tabu search uses a semi-deterministic approach to escape local optima, simulated annealing uses a completely stochastic approach. The performance of each technique is investigated using a structural optimisation problem, and the results are compared to each other as well as to a steepest descent (SD) method.
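The stochastic escape mechanism that distinguishes simulated annealing in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation; the one-dimensional Rastrigin test function, the cooling schedule, and all parameter values are illustrative assumptions.

```python
import math
import random

def rastrigin(x):
    # One-dimensional multi-modal test function; global minimum at x = 0.
    return 10 + x * x - 10 * math.cos(2 * math.pi * x)

def simulated_annealing(f, x0, temp=10.0, cooling=0.995, steps=5000, seed=1):
    """Stochastic escape from local optima: uphill (worse) moves are
    accepted with probability exp(-delta / temp), which shrinks as the
    temperature cools."""
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)          # random neighbour proposal
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand                           # accept (always if downhill)
        if f(x) < f(best):
            best = x                           # track best-so-far
        temp *= cooling                        # geometric cooling schedule
    return best

best = simulated_annealing(rastrigin, x0=4.5)
```

Starting from x0 = 4.5, a pure steepest-descent method would stop in a nearby local basin; the temperature-controlled acceptance of worse moves is what lets the search cross the barriers between basins.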
- A contextual information retrieval framework (National Advisory Committee on Computing Qualifications (NACCQ), 2005). Limbu, D; Connor, AM; MacDonell, SG.
  The amount of information on the Internet is constantly growing and the challenge now is one of finding relevant information. Contextual information retrieval (CIR) is a critical technology for today's search engines to facilitate queries and return relevant information. Despite its importance, little progress has been made in CIR due to the difficulty of capturing and representing contextual information about users. Numerous CIR approaches exist today, but, to the best of our knowledge, none of them offer a similar service to the one proposed in this paper. This paper proposes an alternative framework for CIR from the World Wide Web (WWW). The framework aims to improve query results (or make search results more relevant) by constructing a contextual profile based on a user's behaviour, their preferences, and a shared knowledge base, and by using this information in the search engine framework to find and return relevant information.
- A framework for contextual information retrieval from the WWW (International Society for Computers and Their Applications (ISCA), 2005). Limbu, DK; Connor, AM; MacDonell, SG.
  Search engines are the most commonly used type of tool for finding relevant information on the Internet. However, today's search engines are far from perfect. Typical search queries are short, often one or two words, and can be ambiguous, therefore returning inappropriate results. Contextual information retrieval (CIR) is a critical technique for these search engines to facilitate queries and return relevant information. Despite its importance, little progress has been made in CIR due to the difficulty of capturing and representing contextual information about users. Numerous contextual information retrieval approaches exist today, but to the best of our knowledge none of them offer a similar service to the one proposed in this paper. This paper proposes an alternative framework for contextual information retrieval from the WWW. The framework aims to improve query results (or make search results more relevant) by constructing a contextual profile based on a user's behaviour, their preferences, and a shared knowledge base, and using this information in the search engine framework to find and return relevant information.
- A tabu search environment for engineering design optimisation (AUT University, 2004). Connor, AM; MacDonell, SG.
- A tabu search method for the optimisation of fluid power circuits (SAGE Publications Ltd., 1998). Connor, AM; Tilley, DG.
  This paper describes the development of an efficient algorithm for the optimization of fluid power circuits. The algorithm is based around the concepts of Tabu search, where different time-scale memory cycles are used as a metaheuristic to guide a hill climbing search method out of local optima and locate the globally optimum solution. Results are presented which illustrate the effectiveness of the method on mathematical test functions. In addition to these test functions, some results are presented for real problems in hydraulic circuit design by linking the method to the Bathfp dynamic simulation software. In one such example the solutions obtained are compared to those found using simple steady state calculations.
- An automatic architecture reconstruction and refactoring framework (Springer (Studies in Computational Intelligence v.377), 2012). Schmidt, F; MacDonell, SG; Connor, AM.
  A variety of sources have noted that a substantial proportion of non-trivial software systems fail due to unhindered architectural erosion. This design deterioration leads to low maintainability, poor testability and reduced development speed. The erosion of software systems is often caused by inadequate understanding, documentation and maintenance of the desired implementation architecture. If the desired architecture is lost or the deterioration is advanced, the reconstruction of the desired architecture and the realignment of this desired architecture with the physical architecture both require substantial manual analysis and implementation effort. This paper describes the initial development of a framework for automatic software architecture reconstruction and source code migration. This framework offers the potential to reconstruct the conceptual architecture of software systems and to automatically migrate the physical architecture of a software system toward a conceptual architecture model. The approach is implemented within a proof-of-concept prototype that is able to analyze Java systems and reconstruct a conceptual architecture for them, as well as to refactor a system towards that conceptual architecture.
- An integrated tool set to support software engineering learning (Software Engineering Research Group (SERG), the University of Auckland, 2007). Philpott, A; Buchan, J; Connor, AM.
  This paper considers the possible benefits of an integrated Software Engineering tool set specifically tailored for novice developers, and reflects on the experience of having software engineering students produce various components of this tool set. Experiences with a single semester pilot are discussed and future directions for refining the model are presented.
- Autonomous requirements specification processing using natural language processing (International Society for Computers and Their Applications (ISCA), 2005). MacDonell, SG; Min, K; Connor, AM.
  We describe our ongoing research that centres on the application of natural language processing (NLP) to software engineering and systems development activities. In particular, this paper addresses the use of NLP in the requirements analysis and systems design processes. We have developed a prototype toolset that can assist the systems analyst or software engineer to select and verify terms relevant to a project. In this paper we describe the processes employed by the system to extract and classify objects of interest from requirements documents. These processes are illustrated using a small example.
- Bridging the research-practice gap in requirements engineering (National Advisory Committee on Computing Qualifications (NACCQ), 2009). Pais, S; Talbot, A; Connor, AM.
  This paper examines the perceived research-practice gap in software requirements engineering, with a particular focus on requirements specification and modelling. Various contributions by researchers to the writing of requirements specifications are reviewed, and practitioners' viewpoints are also taken into consideration. On comparing research and practice in this field, possible causes for the gap are identified. The barriers to adopting research contributions in practice are also reviewed. Finally, recommendations to overcome this gap are made.
- Bridging the research-practice gap in requirements engineering through effective teaching and peer learning (IEEE, 2009). Connor, AM; Buchan, J; Petrova, K.
  In this paper, we introduce the concept of the research-practice gap as it is perceived in the field of software requirements engineering. An analysis of this gap has shown that two key causes are lack of effective communication and the relatively light coverage of requirements engineering material in University programmes. We discuss the design and delivery of a master's course in software requirements engineering (SRE) that is designed to overcome some of the issues that have caused the research-practice gap. By encouraging students to share their experiences in a peer learning environment, we aim to improve shared understanding between students (many of whom are current industry practitioners) and researchers (including academic staff members) to improve the potential for effective collaborations, whilst simultaneously developing the requirements engineering skill sets of the enrolled students. Feedback from students in the course is discussed and directions for the future development of the curriculum and learning strategies are given.
- Building services integration: a technology transfer case study (National Advisory Committee on Computing Qualifications (NACCQ), 2007). Connor, AM; Siringoringo, WS; Clements, N; Alexander, N.
  This paper details the development of a relationship between Auckland University of Technology (AUT) and the Building Integration Software Company (bisco) and how projects have been initiated that add value to the bisco product range by conducting applied research utilising students from AUT. One specific project related to producing optimal layout designs is discussed.
- Contextual and concept-based interactive query expansion (National Advisory Committee on Computing Qualifications (NACCQ), 2006). Limbu, D; Pears, R; Connor, AM; MacDonell, S.
  In this paper, we present a novel approach for contextual and concept-based query formulation in web-based information retrieval, which is an ongoing PhD project being undertaken at the Software Engineering Research Lab (SERL) at Auckland University of Technology (AUT). Various query formulation approaches have been studied for a long time with varying degrees of success. To the best of our knowledge none of the existing approaches offer a similar service to the one discussed in this paper. Our novel approach centres on the formulation of a high quality search query using a user's contextual profile, a shared contextual knowledge base, lexical databases and domain-specific concepts. A user's contextual profile is constructed by monitoring and capturing the user's implicit and explicit data. A shared contextual knowledge base is built by consolidating various users' contextual profiles. A machine learning technique is employed to learn a user's specific information needs and support the iterative development of a search query by suggesting alternative terms/concepts for query formulation. Early results indicate that the system has the potential to not only aid in the formulation of high quality search queries but also contribute towards the long term goal of intelligent contextual information retrieval from the WWW.
- Contextual relevance feedback in web information retrieval (Association for Computing Machinery (ACM), 2006). Limbu, DK; Connor, AM; Pears, R; MacDonell, S.
  In this paper, we present an alternative approach to the problem of contextual relevance feedback in web-based information retrieval. Our approach utilises a rich contextual model that exploits a user's implicit and explicit data. Each user's implicit data are gathered from their Internet search histories on their local machine. The user's explicit data are captured from a lexical database, a shared contextual knowledge base and domain-specific concepts using data mining techniques and a relevance feedback approach. This data is later used by our approach to modify queries to more accurately reflect the user's interests as well as to continually build the user's contextual profile and a shared contextual knowledge base. Finally, the approach retrieves personalised or contextual search results from the search engine using the modified/expanded query. Preliminary experiments indicate that our approach has the potential to not only aid in contextual relevance feedback but also contribute towards the long term goal of intelligent relevance feedback in web-based information retrieval.
- Design and control of cross coupled mechanisms driven by AC brushless servomotors (Professional Engineering Publishing, 1998). Connor, AM.
  This paper presents an overview of a design methodology for the optimal synthesis of hybrid mechanisms. Hybrid mechanisms have been defined as multi-degree of freedom systems where the input motions are supplied by different motor types. In this work a five bar mechanism is designed for a given task under the constraint that one input axis rotates with constant velocity whilst the other input can exhibit any motion requirement. A machine of this type is classified as being cross-coupled due to the mechanical linkage between the input axes. Cross-coupling implies that the input motion on one axis affects the position of the other input axis. This can lead to either opposition to, or accentuation of, the control system input. Such a system is difficult to control, because the compensation applied on each axis leads to further disturbance on the other. Results are presented for a real machine operating in this way and the actual output of the machine is compared to the desired input of the machine.
- Improving web information retrieval using shared contexts (Information Sciences and Computer Engineering, 2010). Connor, AM; Limbu, DK; MacDonell, SG; Pears, R.
  The effective utilisation of a user's context in improving the performance of web search engines is a subject of intense research interest. In particular, much attention has been directed to the enhancement of queries and the provision of more relevant information by taking user context into account. Progress in this field has been limited to date, however, due to ongoing challenges in capturing and representing contextual information. We describe here the development and evaluation of a web-based contextual information retrieval system that addresses some of these challenges and makes progress in defining the information required to create contextual profiles. Our system collects and leverages implicit and explicit user data to modify queries with the aim of more accurately reflecting the user's interests. This data is maintained dynamically in each user's contextual profile and utilised to improve the quality of information found during web searches. Where enabled, this data also contributes to the development of a shared contextual knowledge base that can also be used to augment queries. This shared contextual knowledge base is a key aspect of this research. The system has been tested in an observational study that has considered its ability to improve the user's web search experience. This paper presents experimental data to provide evidence of the system's performance, demonstrating that the shared contextual knowledge base extends the functionality associated with the individual contextual profile.
- Improving web search using contextual retrieval (IEEE Computer Society Press, 2009). Limbu, DK; Connor, AM; Pears, R; MacDonell, SG.
  Contextual retrieval is a critical technique for today's search engines in terms of facilitating queries and returning relevant information. This paper reports on the development and evaluation of a system designed to tackle some of the challenges associated with contextual information retrieval from the World Wide Web (WWW). The developed system has been designed with a view to capturing both implicit and explicit user data, which is used to develop a personal contextual profile. Such profiles can be shared across multiple users to create a shared contextual knowledge base. These are used to refine search queries and improve both the search results for a user as well as their search experience. An empirical study has been undertaken to evaluate the system against a number of hypotheses. In this paper, results related to one of these hypotheses are presented, supporting the claim that users can find information more readily using the contextual search system.
- Memory models for improving tabu search with real continuous variables (IEEE, 2006). Connor, AM.
  This paper proposes that current memory models in use for tabu search algorithms are at best evolving, as opposed to adaptive, and that improvements can be made by considering the nature of human memory. By introducing new memory structures, the search method can learn about the solution space in which it is operating. The memory model is based on the transfer of events from episodic memory into generalised rules stored in semantic memory. By adopting this model, the algorithm can intelligently explore the solution space in response to what has been learned to date and continuously update the stored knowledge.
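The baseline that this paper's episodic/semantic memory model extends is the classic short-term (recency) tabu memory, which can be sketched in a few lines. This is not the paper's method; the test function, the discretised step size, and the tabu tenure are illustrative assumptions, and the episodic-to-semantic transfer itself is not shown.

```python
import math
from collections import deque

def objective(x):
    # One-dimensional multi-modal test function; global minimum at x = 0.
    return 10 + x * x - 10 * math.cos(2 * math.pi * x)

def tabu_search(f, x0, step=0.25, tabu_len=8, iters=200):
    """Minimal short-term tabu memory: recently visited points are
    forbidden, so the hill climber must accept the best *admissible*
    neighbour even when it is worse, forcing it out of local optima."""
    tabu = deque(maxlen=tabu_len)   # fixed-tenure recency memory
    x, best = x0, x0
    for _ in range(iters):
        tabu.append(x)              # remember where we have just been
        moves = [round(x + d, 6) for d in (-step, step)]
        admissible = [m for m in moves if m not in tabu] or moves
        x = min(admissible, key=f)  # best admissible move, worse or not
        if f(x) < f(best):
            best = x
    return best

best = tabu_search(objective, x0=2.0)
```

From x0 = 2.0 (a local minimum) a plain hill climber would never move, since both neighbours are worse; the tabu list blocks the way back and drives the search across the intervening barriers to the global minimum at 0.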
- Minimum cost polygon overlay with rectangular shape stock panels (Taylor & Francis, 2008). Siringoringo, WS; Connor, AM; Clements, N; Alexander, N.
  Minimum Cost Polygon Overlay (MCPO) is a unique two-dimensional optimization problem that involves the task of covering a polygon shaped area with a series of rectangular shaped panels. This has a number of applications in the construction industry. This work examines the MCPO problem in order to construct a model that captures essential parameters of the problem to be solved automatically using numerical optimization algorithms. Three algorithms have been implemented to perform the actual optimization task: greedy search, the Monte Carlo (MC) method, and the Genetic Algorithm (GA). Results are presented to show the relative effectiveness of the algorithms. This is followed by a critical analysis of the findings of this research.
- Mining developer communication streams (Academy & Industry Research Collaboration Center (AIRCC) Publishing Corporation, 2014). Connor, AM; Finlay, J; Pears, R.
  This paper explores the concept of modelling a software development project as a process that results in the creation of a continuous stream of data. In terms of the Jazz repository used in this research, one aspect of that stream of data would be developer communication. Such data can be used to create an evolving social network characterised by a range of metrics. This paper presents the application of data stream mining techniques to identify the most useful metrics for predicting build outcomes. Results are presented from applying the Hoeffding Tree classification method in conjunction with the Adaptive Sliding Window (ADWIN) method for detecting concept drift. The results indicate that only a small number of the metrics considered have any significance for predicting the outcome of a build.
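The window-comparison idea behind drift detectors such as ADWIN can be illustrated with a simplified sketch. This is not the ADWIN algorithm used in the paper: real ADWIN adapts the window split automatically, whereas this sketch uses a fixed reference/recent split with a Hoeffding bound, and the class name, window size, and the example stream are illustrative assumptions.

```python
import math
from collections import deque

def hoeffding_bound(n, delta=0.002):
    # Hoeffding bound on the deviation of the mean of n values in [0, 1].
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

class TwoWindowDriftDetector:
    """Simplified stand-in for ADWIN: flag drift when the means of a
    reference window and a recent window differ by more than the sum
    of their Hoeffding bounds."""
    def __init__(self, size=100):
        self.ref = deque(maxlen=size)      # frozen reference window
        self.recent = deque(maxlen=size)   # sliding recent window

    def add(self, x):
        if len(self.ref) < self.ref.maxlen:
            self.ref.append(x)             # still filling the reference
            return False
        self.recent.append(x)
        if len(self.recent) < self.recent.maxlen:
            return False                   # not enough recent evidence yet
        gap = abs(sum(self.ref) / len(self.ref)
                  - sum(self.recent) / len(self.recent))
        return gap > (hoeffding_bound(len(self.ref))
                      + hoeffding_bound(len(self.recent)))

# Hypothetical stream: e.g. a build-failure rate that jumps mid-stream.
det = TwoWindowDriftDetector(size=100)
stream = [0.1] * 150 + [0.9] * 150
drift_at = next((i for i, x in enumerate(stream) if det.add(x)), None)
```

The statistical test is what separates genuine concept drift from noise: the two window means must differ by more than chance alone would allow at the chosen confidence level delta.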
- Mining software metrics from the Jazz repository (ARPN Journal of Systems and Software, 2011). Connor, AM.
  This paper describes the extraction of source code metrics from the Jazz repository and the systematic application of data mining techniques to identify the most useful of those metrics for predicting the success or failure of an attempt to construct a working instance of the software product. Results are presented from a study using the J48 classification method in conjunction with a number of attribute selection strategies applied to a set of source code metrics. These strategies involve the investigation of differing slices of code from the version control system and the cross-dataset classification of the various significant metrics in an attempt to work around the multicollinearity implicit in the available data. The results indicate that only a relatively small number of the software metrics considered have any significance for predicting the outcome of a build. These significant metrics are outlined and the implications of the results are discussed, particularly the relative difficulty of predicting failed build attempts.