New Publications | Number |
---|---|
Monographs and Edited Volumes | 1 |
PhD Theses | 0 |
Journal Articles | 10 |
Book Chapters | 3 |
Conference Publications | 6 |
Technical Reports | 1 |
White Papers | 0 |
Magazine Articles | 0 |
Working Papers | 0 |
Datasets | 0 |
Total New Publications | 21 |
Projects | |
New Projects | 1 |
Ongoing Projects | 1 |
Completed Projects | 0 |
Members | |
Faculty Members | 4 |
Senior Researchers | 4 |
Associate Researchers | 4 |
Researchers | 3 |
Total Members | 15 |
New Members | 2 |
PhDs | |
Ongoing PhDs | 6 |
Completed PhDs | 0 |
Seminars | |
New Seminars | 6 |
Date: 07 March 2016
Presenter: Tushar Sharma
Abstract
Poor design quality and substantial technical debt are common problems in real-world software projects. Design smells are indicators of poor design quality, and the volume of design smells found can be treated as a measure of a system's design debt. Existing smell-detection tools focus largely on implementation smells and do not reveal a comprehensive set of smells that arise at the design level. In this talk, I present Designite, a software design quality assessment tool. It supports comprehensive design-smell detection and provides detailed metrics analysis. Further, it offers various features that help identify the issues contributing to design debt and improve the design quality of the analyzed software system.
Tushar will present a corresponding paper at BRIDGE: First International Workshop on Bringing Architecture Design Thinking into Developers' Daily Activities (Bridge'16), which is co-located with the 38th International Conference on Software Engineering, May 14-22, 2016.
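As a simplified illustration of metrics-based smell detection (this is not Designite's actual implementation; the smell name, thresholds, and data layout are assumptions for the example), a detector might flag classes whose size metrics exceed configured limits:

```python
# Hypothetical, simplified detector for one design smell: an overly
# large class ("Insufficient Modularization"). Thresholds and metric
# names are illustrative only.

def detect_insufficient_modularization(classes, max_methods=20, max_loc=500):
    """Return names of classes whose size metrics exceed the thresholds."""
    smells = []
    for cls in classes:
        if cls["methods"] > max_methods or cls["loc"] > max_loc:
            smells.append(cls["name"])
    return smells

# Example metrics, as a real tool might extract them from source code.
metrics = [
    {"name": "OrderManager", "methods": 35, "loc": 1200},
    {"name": "Invoice", "methods": 8, "loc": 150},
]
print(detect_insufficient_modularization(metrics))  # ['OrderManager']
```

Real tools combine many such metric rules, and design smells (as opposed to implementation smells) also require relationships between classes, not just per-class counts.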
Date: 30 June 2016
Presenter: Alexandros Kapravelos
Abstract
In this talk I will present Hulk, a dynamic analysis system that detects malicious behavior in browser extensions by monitoring their execution and corresponding network activity. Hulk's novelty derives from how it elicits malicious behavior in extensions: it serves dynamic pages that adapt to an extension's expectations of web page structure and content, and it fuzzes the extension's event handlers. The second part of the talk focuses on a particular malicious activity originating in browser extensions: ad injection. In our experiments we found that ad injection affects more than 5% of the daily unique IP addresses accessing Google, thereby affecting tens of millions of users around the globe.
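One way to make the ad-injection detection idea concrete (a minimal sketch, not Hulk's implementation; the function names and the substring-based domain match are assumptions) is to diff the resources a page loads with and without the extension installed:

```python
# Illustrative sketch: detect ad injection by comparing the set of
# resource URLs a page loads in a clean browser against the set loaded
# with the extension installed, then checking additions against a list
# of known ad-serving domains.

def injected_resources(baseline_urls, with_extension_urls, ad_domains):
    """URLs added by the extension that point to known ad domains."""
    added = set(with_extension_urls) - set(baseline_urls)
    return {u for u in added if any(d in u for d in ad_domains)}

baseline = {"https://example.com/app.js"}
observed = {"https://example.com/app.js", "https://ads.example.net/inject.js"}
print(injected_resources(baseline, observed, ["ads.example.net"]))
```

A real system must also account for legitimate page dynamism (rotating first-party resources), which is why monitoring actual execution and network activity, as Hulk does, matters.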
Alexandros Kapravelos is an Assistant Professor in the Department of Computer Science at North Carolina State University. His research interests lie in the area of computer security and he is particularly interested in browser security and building systems that solve security problems. In the past, he was the lead developer of Wepawet, a publicly available system that detects drive-by downloads with the use of an emulated browser, and Revolver, a system that detects evasive drive-by download attempts. He is currently interested in internet-wide attacks that compromise the users’ security, building scalable systems to protect users and improving privacy on the web.
Date: 20 July 2016
Presenter: Ioannis Kamitsos
Abstract
This talk will focus on a few recent research results on the greening of Information and Communication Technology (ICT). In the first part, I will demonstrate how resource pooling can be exploited to optimize the tradeoff between two intrinsically conflicting objectives in cloud computing: energy consumption and delay performance. I will then present how we extend our optimal energy-conserving techniques to the case of DSL broadband access, and show how our approach results in more energy-efficient and stable DSL operation compared with existing Broadband Forum power-saving policies. Finally, I will demonstrate an analytic framework that helps data center operators minimize day-to-day operational costs (such as electricity and bandwidth costs) based on the characteristics of processing jobs, as well as optimally expand their data center networks based on predictions of future operational costs.
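The energy/delay tradeoff can be sketched with a toy model (this is an assumption for illustration, not the talk's actual formulation): choose how many servers to keep active so as to minimize a weighted sum of power draw and queueing delay, where delay grows as the active capacity approaches the offered load.

```python
# Toy energy/delay tradeoff: pick the number of active servers c that
# minimises  c * P  +  beta / (c*mu - lam),  where lam is the arrival
# rate, mu the per-server service rate, P the per-server power, and
# beta the weight placed on delay. Stability requires c*mu > lam.

def best_server_count(lam, mu, power_per_server, beta, max_servers):
    best_c, best_cost = None, float("inf")
    for c in range(1, max_servers + 1):
        if c * mu <= lam:          # system unstable at this capacity
            continue
        delay = 1.0 / (c * mu - lam)
        cost = c * power_per_server + beta * delay
        if cost < best_cost:
            best_c, best_cost = c, cost
    return best_c, best_cost

c, cost = best_server_count(lam=8.0, mu=2.0, power_per_server=1.0,
                            beta=10.0, max_servers=20)
print(c, cost)  # 6 8.5
```

Raising `beta` (valuing delay more) pushes the optimum toward more active servers; lowering it favors consolidating load onto fewer machines and powering the rest down, which is the essence of the pooling argument.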
Ioannis Kamitsos is a Senior Software Engineer with Bloomberg LP, New York. He received the Diploma degree in electrical and computer engineering from the National Technical University of Athens, Greece, in 2006, and the M.A. and Ph.D. degrees from Princeton University, Princeton, NJ, USA, in 2009 and 2012, respectively. During the summer of 2010, he interned at the Standards and RF Laboratory, Samsung Telecommunications America, Richardson, TX, USA. His research interests include optimization theory, optimal control, and machine learning/big data analytics.
Date: 12 October 2016
Presenter: Tushar Sharma
Abstract
Software change impact analysis (CIA) methods enable developers to understand the potential impacts of a code change so that the change can be applied confidently without compromising the reliability of the software. However, existing CIA approaches do not support all source-code granularities, and they lack support for inter-granular change impact queries and hidden dependencies. In this presentation, I introduce Augur, an automated CIA approach based on static code analysis that addresses these shortcomings. Augur infers and maintains semantic and environment dependencies, along with data and control dependencies, between source-code entities across granularities. Additionally, Augur uses the Change Impact Query Language, a novel query language for impact analysis proposed in this work, to support inter-granular CIA queries with a batch-querying feature.
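At its core, static CIA computes the transitive closure of dependents over a dependency graph. The following minimal sketch (not Augur itself; the graph shape and entity names are invented for the example) shows the basic computation:

```python
# Minimal change-impact sketch: given a map from each entity to the
# entities that directly depend on it, the impact set of a change is
# the set of all transitive dependents, found by breadth-first search.

from collections import deque

def impact_set(dependents, changed):
    """All entities transitively affected by changing `changed`."""
    impacted, queue = set(), deque([changed])
    while queue:
        entity = queue.popleft()
        for dep in dependents.get(entity, []):
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

graph = {
    "Parser.tokenize": ["Parser.parse"],
    "Parser.parse": ["Compiler.compile", "Linter.lint"],
}
print(sorted(impact_set(graph, "Parser.tokenize")))
# ['Compiler.compile', 'Linter.lint', 'Parser.parse']
```

The hard parts, which this sketch omits, are exactly what the abstract highlights: building the graph so it captures semantic, environment, and hidden dependencies, and answering queries that cross granularities (method, class, module) rather than operating on a single flat level.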
Date: 09 November 2016
Presenter: Diomidis Spinellis
Abstract
Finding and fixing errors in computing systems is an important and difficult task. Debugging often consumes most of a developer's workday, and acquiring the required experience can take a lifetime. The talk categorizes, explains, and illustrates methods, strategies, techniques, and tools that can be used to pinpoint elusive and persistent bugs. Its aim is to provide an overview of the complete debugging landscape: from general principles, high-level strategies, and behavioral traits to concrete techniques, handy tools, and nifty tricks.
Date: 14 December 2016
Presenter: Dimitris Mitropoulos
Abstract
Cross-Site Scripting (XSS) is one of the most common web application vulnerabilities. It is therefore sometimes referred to as the “buffer overflow of the web.” Drawing a parallel from the current state of practice in preventing unauthorized native code execution (the typical goal in a code injection), we propose a script whitelisting approach to tame JavaScript-driven XSS attacks. Our scheme involves a transparent script interception layer placed in the browser’s JavaScript engine. This layer is designed to detect every script that reaches the browser, from every possible route, and compare it to a list of valid scripts for the site or page being accessed; scripts not on the list are prevented from executing. To avoid the false positives caused by minor syntactic changes (e.g., due to dynamic code generation), our layer uses the concept of contextual fingerprints when comparing scripts.
Contextual fingerprints are identifiers that represent specific elements of a script and its execution context. Fingerprints can be easily enriched with new elements, if needed, to enhance the proposed method's robustness. The list can be populated by the website's administrators or a trusted third party. To verify our approach, we have developed a prototype and tested it successfully against an extensive array of attacks performed on more than 50 real-world vulnerable web applications. We measured the browsing performance overhead of the proposed solution on eight websites that make heavy use of JavaScript. Our mechanism imposed an average overhead of 11.1% on the execution time of the JavaScript engine. When measured as part of a full browsing session, and for all tested websites, the overhead introduced by our layer was less than 0.05%. When script elements are altered or new scripts are added on the server side, a new fingerprint-generation phase is required. To examine the temporal aspect of contextual fingerprints, we performed a short-term and a long-term experiment based on the same websites. The former showed that over a short period (10 days), for seven of the eight websites, the majority of valid fingerprints stay the same (more than 92% on average). The latter, however, indicated that in the long run the number of unchanged fingerprints decreases. Both experiments can be seen as among the first attempts to study the feasibility of a whitelisting approach for the web.
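The whitelist check itself can be sketched as follows. This is a deliberate simplification of the contextual-fingerprint idea: a real fingerprint incorporates execution-context elements precisely so that dynamically generated script variants do not break the match, whereas the plain hash below (function names and normalization chosen for the example) would reject any syntactic change.

```python
# Simplified script-whitelisting sketch: each allowed script is keyed by
# a hash of its whitespace-normalised source plus its origin; any script
# whose fingerprint is not on the list is blocked before execution.

import hashlib

def fingerprint(script_source, origin):
    normalised = " ".join(script_source.split())   # collapse whitespace
    data = (origin + "|" + normalised).encode()
    return hashlib.sha256(data).hexdigest()

def allow_script(script_source, origin, whitelist):
    """True if the script's fingerprint is on the whitelist."""
    return fingerprint(script_source, origin) in whitelist

wl = {fingerprint("alert('hi')", "https://example.com")}
print(allow_script("alert('hi')", "https://example.com", wl))  # True
print(allow_script("evil()", "https://attacker.net", wl))      # False
```

The gap between this sketch and the proposed scheme is the point of the contribution: by fingerprinting contextual elements rather than raw bytes, the interception layer tolerates benign dynamic code generation while still rejecting injected scripts.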