Output list
Conference paper
Spatial diversity for HF remote sensors
Published 2017
2017 IEEE Conference on Antenna Measurements & Applications (CAMA)
IEEE Conference on Antenna Measurements & Applications (CAMA) 2017, 04/12/2017–06/12/2017, Tsukuba, Japan
A large volume of research has been undertaken to improve short-range, mesh-based sensor networks. This paper considers a contrasting, but less established, research position: that long-range, single-hop HF transmissions suit many applications. HF transmissions are, however, not without challenges, and this paper details an experiment to evaluate the viability of using spatial diversity to mitigate the intermittent and cyclical availability of HF circuits. Four transmitting nodes were placed across Australia and six days of packet transmissions at 14 MHz were analysed. The findings support the use of spatial diversity to maximise HF circuit availability, and the presence of redundant circuits suggests the system has the capacity to accommodate antenna radiation patterns that present nulls in return for targeted gain.
Conference paper
An analysis of changing enterprise network traffic characteristics
Published 2017
2017 23rd Asia-Pacific Conference on Communications (APCC)
23rd Asia-Pacific Conference on Communications (APCC) 2017, 11/12/2017–13/12/2017, Perth, WA, Australia
Studies on the composition and nature of Internet protocols are crucial for continued research and innovation. This study used three different methods to investigate the presence and level of support for various Internet protocols. Internet traffic entering and exiting a university network was passively captured, anonymised and analysed to test protocol usage. Active tests probed the Internet's most popular websites and experiments on the default behaviour of popular client, server and mobile operating systems were performed to reconcile the findings of the passive data collection. These results are valuable to research areas, such as those using emulations and simulations, where realism is dependent on the accuracy of the underlying assumptions about Internet traffic. Prior work is leveraged to explore changes and protocol adoption trends. This study shows that the majority of Internet traffic is now encrypted. There has also been an increase in large UDP frames, which we attribute to the Google QUIC protocol. Support for TCP options such as Selective Acknowledgements (SACK) and Maximum Segment Size (MSS) can now be assumed. Explicit Congestion Notification (ECN) usage is still marginal, yet active measurement shows that many servers will support the protocol if requested. Recent IETF standards such as Multipath TCP and TCP Fast Open have small but measurable levels of adoption.
Conference paper
Measuring the reliability of 802.11 WiFi networks
Published 2015
2015 Internet Technologies and Applications (ITA)
Internet Technologies and Applications (ITA), 2015, 08/09/2015–11/09/2015, Wrexham, Wales
Over half of the transmission time in WiFi networks is dedicated to ensuring that errors are corrected or detected. Despite these mechanisms, many studies have concluded that frame error rates vary. An increased understanding of why frames are lost is a pragmatic approach to improving real-world 802.11 throughput. The potential beneficiaries of this research include rate control algorithms, Modulation and Coding Schemes, simulation models, frame size selection and 802.11 configuration guidelines. This paper presents a measurement study of the factors which correlate with packet loss in 802.11 WiFi. Both passive and active approaches were used to investigate how the frame size, modulation and coding scheme and airtime affect the loss rate. Overall, packet errors were high, but the size of frames was not a major determinant of the loss rate. The loss rate decreased with the airtime, but at substantially lower rates than those suggested in simple packet error models. Future work will try to further isolate and investigate specific errors, such as head-on collisions in the preamble.
Conference paper
Improving compliance with password guidelines: how user perceptions of passwords and security threats affect compliance with guidelines
Published 2014
2014 47th Hawaii International Conference on System Sciences, 3188 - 3197
47th Hawaii International Conference on System Sciences, HICSS 2014, 06/01/2014–09/01/2014, Waikoloa, HI, USA
Passwords have long been the preferred method of user authentication, yet poor password practices cause security issues. The study described in this paper investigates how user perceptions of passwords and security threats affect intended compliance with guidelines, and explores how these perceptions might be altered in order to improve compliance. It tests a research model based on protection motivation theory. Two groups of Internet users were surveyed, one of which received password security information and an exercise to reinforce it. This study suggests effective ways that trainers or employers can improve compliance with password guidelines. In particular, training programs should aim to enhance IS security coping appraisal. The research model proposed in this study has also been shown to be a useful model for explaining IS security behavioural intentions.
Conference paper
An investigation of the impact of recertification requirements on recertification decisions
Published 2013
Proceedings of the 2013 annual conference on Computers and people research - SIGMIS-CPR '13, 79 - 86
Proceedings of the 2013 ACM Conference on Computers and People Research - SIGMIS-CPR 2013, 30/05/2013–01/06/2013, Cincinnati, OH, USA
Certification has become a popular adjunct to traditional means of acquiring information and communication technology (ICT) knowledge and skills and many employers specify a preference for those holding certifications. Many ICT certifications include a requirement to recertify regularly, but little is known about the impacts of recertification requirements on the intention to maintain certification. This research explores the factors that influence the recertification decision. The perspectives of both ICT students and ICT professionals were sought. Both students and ICT professionals were very positive about the benefits of certification and highlighted that intrinsic desire for improved knowledge and skill, as well as job related benefits, motivated them to obtain certification and maintain it. The ICT professionals also emphasized the importance of certification to their employers. ICT professionals had strong knowledge of the recertification requirements for the certifications they held. This was not, however, the case for the ICT students; many students had little knowledge of what recertification might entail. A key factor contributing to intention to recertify was flexibility to seek higher paying jobs. The cost of recertification was not found to be a major issue. Support from employers in providing time for obtaining recertification was considered important. Given the huge range of different certifications available, and the varying value of these to the holder at different points in their career, ICT professionals appeared to take a strategic approach to the decision to recertify. Not surprisingly, they considered, and selectively chose, those which are worth recertifying given their current position and career aspirations.
Conference paper
Published 2013
2013 36th International Conference on Telecommunications and Signal Processing (TSP), 282 - 289
36th International Conference on Telecommunications and Signal Processing (TSP), 02/07/2013–04/07/2013, Rome, Italy
Ethernet speeds have increased to 40–100 Gbps since the release of IEEE 802.3ba. In this paper, we have extended Intel's Large Receive Offload Linux software driver function to process UDP/IP packets and to manage out-of-order packets, and designed a scalable, programmable RISC core to support these functions in the Network Interface. The processing methodology and processing cycles for UDP packets inside the Network Interface are also discussed. In addition, the performance and data movements of the three-stage pipelined RISC core have been measured at communication rates up to 100 Gbps. The results presented herein show that a cost-effective 800 MHz embedded processor core can provide the efficiency required of the network interface to support a wide range of transmission line speeds, up to 100 Gbps. Furthermore, we have identified several techniques that contribute to packet processing with fewer header and data transfers in a network interface.
Conference paper
Design a scalable ethernet Network Interface supporting the large receive offload
Published 2012
2012 International Symposium on Communications and Information Technologies (ISCIT)
International Symposium on Communications and Information Technologies (ISCIT) 2012, 02/10/2012–05/10/2012, Gold Coast, QLD
Ethernet speeds have increased to 40–100 Gbps since the release of IEEE 802.3ba. In this paper, we have enhanced Intel's Large Receive Offload Linux software driver function to manage out-of-order packets, and designed a scalable RISC core to support this function in the Network Interface. The RISC core's performance and data movements at communication rates up to 100 Gbps have been measured, and the results presented herein show that a cost-effective embedded RISC core can provide the efficiency required of the network interface to support a wide range of transmission line speeds, up to 100 Gbps. Furthermore, we have identified several techniques that contribute to packet processing with fewer headers and less data copying in host memory.
Conference paper
Large MTUs and internet performance
Published 2012
IEEE HPSR 2012 - 13th IEEE Conference on High Performance Switching and Routing, 24/06/2012–27/06/2012, Belgrade, Serbia
Ethernet data rates have increased by many orders of magnitude since standardisation in 1982. Despite these continual data rate increases, the 1500 byte Maximum Transmission Unit (MTU) of Ethernet remains unchanged. Experiments with varying latencies, loss rates and transaction lengths were performed to investigate the potential benefits of jumbo frames on the Internet. This study reveals that large MTUs offer throughputs much higher than a simplistic overhead analysis might suggest. The reasons for these higher throughputs are explored and discussed.
Conference paper
Cost effective RISC core supporting the large sending offload
Published 2012
2012 International Symposium on Communications and Information Technologies (ISCIT)
International Symposium on Communications and Information Technologies (ISCIT) 2012, 02/10/2012–05/10/2012, Gold Coast, QLD
Ethernet speeds for sending and receiving frames have increased from 40 to 100 Gbps since IEEE 802.3ba was released, and industry and academia have focused on scaling TCP/IP protocol processing to 40–100 Gbps. Large Send Offload (LSO) is a de facto standard in which packet sending is offloaded to the network interface at speeds up to 10 Gbps, but it is not clear whether a network interface can support this function at the new 40–100 Gbps rates. The widespread use of fully customised, hardware-based NICs can be attributed to two factors: it is still not clear whether a General Purpose Processor (GPP) can provide the processing required for line speeds beyond 10 Gbps, and the GPP's clock rate limits the processing it can devote to network interfaces. However, using a RISC core engine to offload the LSO function can deliver some important features to network interface design, such as simplicity, scalability and a shorter development cycle. In this paper, we have investigated using a specialised RISC core to process the LSO functions for TCP/IP and UDP/IP at communication rates up to 100 Gbps. To achieve this, we have enhanced the LSO algorithm to scale it to 100 Gbps, and a fast DMA engine is used to transfer data within the network interface. The LSO processing methodology in the network interface is presented, and the RISC core's performance and data movements at communication rates up to 100 Gbps have been measured. A 148 MHz RISC core can support the sending-side processing of the TCP/IP and UDP/IP protocols at transmission speeds up to 100 Gbps with a 1500 byte MTU, and a 3759 MHz DMA is required to eliminate idle cycles while transferring data over the 64-bit local bus.
Conference paper
An Evaluation of TCP and UDP Protocols Processing Required for Network Interface Design at 100 Gbps
Published 2011
2011 IEEE International Conference on High Performance Computing and Communications
13th International Conference on High Performance Computing and Communications (HPCC), 02/09/2011–04/09/2011, Banff, AB
Server platforms today face major challenges when performing TCP/IP or UDP/IP protocol processing. As network speeds now exceed one gigabit per second (Gbps), the design and implementation of high-performance Network Interfaces (NIs) have become very challenging. There are different possible design approaches for high-speed NIs; however, using a General Purpose Processor (GPP) as a core engine to offload some, if not all, of the TCP/IP or UDP/IP protocol functions can deliver some important features to NIs, such as simplicity, scalability, a shorter development cycle and reduced costs. Still, it is not clear whether a GPP can provide the processing required for line speeds over 10 Gbps, or where the limit of such a GPP lies in supporting network interface processing. In this paper, we have measured the amount of processing required to design Ethernet Network Interfaces (ENIs) supporting different transmission line speeds. A programmable NI model based on a RISC core has been designed to measure the processing required for the ENI. The results show that a RISC core running at 240 MHz can be used as the processing core in a high-speed ENI, and such a core can support a wide range of transmission line speeds, up to 100 Gbps. We also discuss some of the design issues related to RISC-based NIs and the type of data movement used.