Seminar: Human activity analysis, Selen Pehlivan, TED University [09.06.2016]
Date: June 9, 2016 Thursday
In this talk, I will summarize our work on human activity analysis. First, focusing on multiple camera views, I will describe our method for fusing frame-level judgments across views. We have shown that when there are enough overlapping views to generate a volumetric reconstruction, our recognition performance is comparable with that of volumetric-reconstruction approaches. Then, I will briefly introduce our recent work on weakly supervised spatio-temporal action localization.
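The fusion of per-view frame judgments can be pictured as a weighted late fusion of class scores from each camera. The sketch below is purely illustrative, not the method from the talk: the weights, action names, and scores are made up.

```python
# Hypothetical sketch of late fusion across camera views: each view
# yields per-class scores for a frame, and a weighted average combines
# them into one decision. All numbers and names are illustrative.

def fuse_view_scores(view_scores, view_weights):
    """Combine per-view class-score dicts into one fused score dict."""
    fused = {}
    total = sum(view_weights)
    for scores, w in zip(view_scores, view_weights):
        for action, s in scores.items():
            fused[action] = fused.get(action, 0.0) + w * s / total
    return fused

# Three overlapping views voting on the same frame (made-up scores).
views = [
    {"walk": 0.7, "wave": 0.3},
    {"walk": 0.6, "wave": 0.4},
    {"walk": 0.2, "wave": 0.8},  # an occluded or oblique view
]
weights = [1.0, 1.0, 0.5]  # down-weight the less reliable view

fused = fuse_view_scores(views, weights)
predicted = max(fused, key=fused.get)  # fused decision for the frame
```

In a real system the weights themselves would be learned per camera, which is closer in spirit to the model described in the later egocentric-camera talk as well.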
In the second part of the talk, I will present our work on fine-grained categorization. We introduce a unified approach for classification and keypoint localization, targeting subcategories with wide pose variations and high intra-class similarities. We apply our method to identifying instances of birds.
Finally, I will present our recent effort on action understanding, investigating the representational properties of the Action Observation Network using fMRI and computational modeling. In this study, we used powerful computational tools from computer vision, as well as attribute-based semantic models, to represent videos of natural actions, and linked those models to brain responses (fMRI) using representational similarity analysis.
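The core of representational similarity analysis is simple enough to sketch: build a representational dissimilarity matrix (RDM) for each representation (e.g., a vision model's features and fMRI responses to the same videos), then correlate the two RDMs. The vectors below are toy data, not the study's features or brain responses.

```python
# Illustrative RSA sketch: an RDM holds pairwise dissimilarities
# (here 1 - Pearson r) between stimuli; two representations agree
# when their RDMs correlate. All vectors are toy data.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm_upper(vectors):
    """Upper triangle of the RDM: dissimilarity per stimulus pair."""
    pairs = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            pairs.append(1.0 - pearson(vectors[i], vectors[j]))
    return pairs

# Toy features for four action videos in two representations.
model_feats = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0.9, 0.2]]
brain_feats = [[2, 0, 1], [1.8, 0.2, 1], [0, 2, 1], [0, 1.7, 1.4]]

model_rdm = rdm_upper(model_feats)
brain_rdm = rdm_upper(brain_feats)
similarity = pearson(model_rdm, brain_rdm)  # high if geometries match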
Selen Pehlivan holds a B.S. (Bilkent University, 2004) and an M.S. (Koç University, 2006) in Computer Engineering. Between 2004 and 2006, she was a member of the 3DTV EU Project. She received her Ph.D. degree (Bilkent University, 2012), also in Computer Engineering. From August 2009 to February 2011, she was a visiting scholar in the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC). During her graduate studies, she worked as a research and teaching assistant. Before joining TED University, she was a postdoctoral research associate in the Computer Vision Group at the University of Central Florida (UCF).
Dr. Pehlivan works on computer vision and machine learning, with a primary focus on human activity understanding through video interpretation and object recognition.
Workshop: "Software Technologies and Processes from Theory to Practice" [11.05.2016]
09:00-10:00 Opening (Ömer Gökçeli)
10:00-10:30 Project Management (Onur Koçoğlu)
10:45-11:15 Business Analysis (Songül Nişancı)
11:15-11:45 Java Software Development (Salim Şahin)
12:00-12:30 .NET Software Development
Workshop: Graduate Research Workshop and Exhibition of Senior Projects [09.05.2016]
Computer Engineering Conference Hall, Beytepe Campus
Date: May 9, 2016 Monday
09:00 - 15:30
Seminar: Boosting Efficiency in Large-Scale Search Engines [15.03.2016]
This talk focuses on various mechanisms that aim to improve the efficiency of search on a large-scale distributed architecture. I will start with a quick overview of the core components of a search engine and discuss why efficiency matters. Next, I will cover a variety of issues and methods regarding caching and pruning, two key mechanisms for fast query processing in practical search engines.
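One of the caching mechanisms in scope, result caching, can be sketched in a few lines: repeated queries skip full query processing, and a least-recently-used (LRU) policy bounds memory. The capacity, backend function, and queries below are illustrative, not from the talk.

```python
# A minimal LRU result-cache sketch for query processing.
# The backend stands in for full processing over the inverted index.
from collections import OrderedDict

class QueryResultCache:
    def __init__(self, capacity, backend):
        self.capacity = capacity
        self.backend = backend        # called only on a cache miss
        self.cache = OrderedDict()    # query -> results, in recency order
        self.hits = self.misses = 0

    def get(self, query):
        if query in self.cache:
            self.hits += 1
            self.cache.move_to_end(query)   # mark as recently used
            return self.cache[query]
        self.misses += 1
        results = self.backend(query)
        self.cache[query] = results
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return results

# Toy backend and a short query stream.
cache = QueryResultCache(2, lambda q: f"results for {q!r}")
cache.get("ankara")      # miss
cache.get("metu")        # miss
cache.get("ankara")      # hit
cache.get("search")      # miss; evicts "metu", the LRU entry
```

Production engines refine this in many directions (static vs. dynamic caches, caching posting lists as well as results), but the hit/miss trade-off above is the core idea.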
Ismail Sengor Altingovde
Date: March 15, 2016 Tuesday, 11am, seminar room
Ismail Sengor Altingovde is currently an assistant professor in the Computer Engineering Department of Middle East Technical University (METU) in Ankara, Turkey. He received his B.S., M.S. and Ph.D. degrees, all in Computer Science, from Bilkent University (Turkey) in 1999, 2001 and 2009, respectively. Before joining METU, he worked as a postdoctoral researcher at Bilkent University and at the L3S Research Center in Hannover, Germany. He is a recipient of a Yahoo! Faculty Research and Engagement Program (FREP) award (2013).
Seminar: Achieving High Performance in Mobile-Cloud Computing with Self-Evaluating, Self-Protecting Agents
The proliferation of cloud computing resources in recent years offers a way for mobile devices with limited resources to achieve computationally intensive tasks in real time. The mobile-cloud computing paradigm, which involves collaboration of mobile and cloud resources to accomplish such tasks, is expected to become increasingly popular in mobile application development. While mobile-cloud computing promises to overcome the computational limitations of mobile devices, the lack of frameworks compatible with standard technologies makes it harder to adopt dynamic mobile-cloud computing at large. Furthermore, offloading computation to the cloud entails security risks associated with handing sensitive data and code over to an untrusted platform. Security models for mobile-cloud computing are not at a mature state and mostly focus only on privacy, ignoring the aspect of integrity, which is essential to trust the results generated. Perfect security is hard to achieve in real-time mobile-cloud computing due to the extra computational overhead introduced by complex security mechanisms.
In this talk, I will discuss how mobile agents can be utilized as the key constructs of a secure framework to achieve low response time and energy consumption in mobile-cloud computing. First, I will present a dynamic computation offloading approach for context-aware mobile-cloud computing, based on JADE mobile agents. This approach does not impose any requirements on the cloud platform other than providing isolated execution containers, and it alleviates the management burden of offloaded code by the mobile platform using autonomous agent-based application partitions. We investigate the effects of different runtime environment conditions on the performance of the agent-based framework, and present a simple and low-overhead dynamic makespan estimation model that can be integrated into agents to enhance them with self-performance evaluation capability. In the second part of the talk, I will discuss a dynamic tamper-resistance approach for protecting mobile computation offloaded to the cloud, by augmenting mobile agents with self-protection capability through integration of software guards into their code. The tamper-resistance mechanism achieves low execution time overhead and is capable of detecting both load-time and runtime modifications to agent code, enabling judgment of trust in the computation results.
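The offloading decision underlying such frameworks can be sketched as a simple makespan comparison: offload a task only when the estimated remote makespan (transfer time plus latency plus cloud execution) beats local execution. All parameter names and numbers below are illustrative assumptions, not the JADE-based framework's actual model.

```python
# Hypothetical makespan-based offloading decision. The real framework
# estimates makespan dynamically from runtime conditions; this sketch
# uses fixed, made-up parameters to show the comparison itself.

def estimate_remote_makespan(input_bytes, bandwidth_bps, cloud_time_s, rtt_s):
    """Transfer time + round-trip latency + cloud execution time."""
    transfer_s = input_bytes * 8 / bandwidth_bps
    return transfer_s + rtt_s + cloud_time_s

def should_offload(local_time_s, input_bytes, bandwidth_bps,
                   cloud_time_s, rtt_s):
    remote = estimate_remote_makespan(input_bytes, bandwidth_bps,
                                      cloud_time_s, rtt_s)
    return remote < local_time_s

# A heavy task over a fast link: offloading wins.
heavy = should_offload(local_time_s=10.0, input_bytes=1_000_000,
                       bandwidth_bps=10_000_000, cloud_time_s=1.0, rtt_s=0.1)

# A light task over a slow link: stay local.
light = should_offload(local_time_s=0.5, input_bytes=1_000_000,
                       bandwidth_bps=1_000_000, cloud_time_s=0.1, rtt_s=0.1)
```

The self-performance-evaluation capability described in the talk amounts to agents keeping such estimates current as bandwidth and load change at runtime.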
Date: March 18, 2016 Friday, 10am, seminar room
Pelin Angın is a Postdoctoral Research Associate in the Department of Computer Science at Purdue University. She received her B.S. degree in Computer Engineering from Bilkent University in 2007 and her Ph.D. degree in Computer Science from Purdue University in 2013. Her research interests lie in the fields of high-performance mobile-cloud computing, distributed systems security, and cloud-based assistive technologies. Her work has been published in several international journals and conference proceedings, including IEEE Transactions on Dependable and Secure Computing, Elsevier's Journal of Network and Computer Applications, and the IEEE International Conference on Cloud Computing.
Seminar: Action Recognition and Prediction with Applications to Daily Living [29.11.2015]
With recent technological developments, egocentric cameras have become a part of our lives. They are designed as tiny cameras that can be worn without disturbing the wearer. To investigate the possible information gain from egocentric cameras, we used a multiple-camera setting containing both an egocentric camera and multiple static cameras. Our research showed that, when fused correctly, the information from different types of cameras increases the recognition accuracy of actions. The model we proposed for this task is also suitable for other multi-modal settings; to demonstrate its generality, we tested it on a setting with multiple static cameras and showed state-of-the-art results. Our model learns the importance of each camera in recognizing the actions, and it can also be used to direct scenes automatically. We created examples of automatically directed scenes to demonstrate the concept.
We also addressed the problem of improving people's lives in a preventive way using egocentric cameras. In our work, "preventive" refers to the general notion of reminders that can prevent people from making mistakes that cause problems. For example, when people leave a room while the stove is on, they might be reminded to turn it off. We proposed a notification decision mechanism that reasons about interdependencies between actions, checks at every time step whether there is a missing action that should be completed before the ongoing one ends, calculates a cost for missing it, and uses this cost to make a notification decision. Such a notification system requires recognizing past actions and predicting the ongoing action while segmenting the activity observed so far. For this purpose, we proposed a model that uses standard features and accomplishes these three tasks successfully. We showed promising results on the extremely challenging task of issuing correct and timely reminders on a new egocentric dataset.
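The cost-based notification decision can be sketched as a lookup over action dependencies: when the predicted ongoing action has an uncompleted prerequisite whose miss cost exceeds a threshold, issue a reminder. The dependency table, costs, and threshold below are made up for illustration; the actual system learns these from data.

```python
# Illustrative cost-based reminder decision (stove example from the
# abstract). prerequisite -> (dependent action, cost of skipping it);
# the table and costs are hypothetical.
DEPENDENCIES = {
    "turn off stove": ("leave room", 0.9),
    "take keys": ("leave room", 0.6),
    "close fridge": ("prepare meal", 0.3),
}

def notification(completed, ongoing, threshold=0.5):
    """Return the costliest missing prerequisite of `ongoing`, or None."""
    best = None
    for prereq, (dependent, cost) in DEPENDENCIES.items():
        if dependent == ongoing and prereq not in completed and cost >= threshold:
            if best is None or cost > best[1]:
                best = (prereq, cost)
    return best[0] if best else None

# The stove was turned on but never off; the person starts leaving.
done = {"turn on stove", "cook", "take keys"}
reminder = notification(done, ongoing="leave room")
```

In the full system, `completed` comes from recognizing past actions and `ongoing` from prediction over the segmented activity, which is why all three tasks must be solved jointly.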
Bilge Soran, PhD
University of Washington
Bilge Soran is a postdoctoral researcher in the Mechanical Engineering Department at the University of Washington, working on healthcare applications of computer vision. She earned a Ph.D. degree from the Computer Science and Engineering Department at the University of Washington in 2015, under the supervision of Linda Shapiro and Ali Farhadi; her thesis topic was "Action Recognition and Prediction with Applications to Medical Diagnosis and Daily Living". She holds an M.Sc. degree from the same department, with a thesis on parcellation of the human inferior parietal lobule based on diffusion MRI. She earned another M.Sc. degree from the Gebze Institute of Technology in 2007, with a thesis on event-driven molecular dynamics simulation. Before that, she obtained her B.Sc. degree from Middle East Technical University in 2002. She worked at The Scientific and Technological Research Council of Turkey from 2003 to 2009 as a researcher and senior researcher on avionics and simulation systems, and before that had two years of industry experience.
Seminar: Action Perception in the Human Brain [11.11.2015]
Successfully perceiving and recognizing the actions of others is of utmost importance for the survival of many species. For humans, action perception is considered to support important higher-order social skills, such as communication, intention understanding and empathy, some of which may be uniquely human. Over the last two decades, neurophysiological and neuroimaging studies in primates have identified a network of brain regions in occipito-temporal, parietal and premotor cortex associated with the perception of actions, known as the Action Observation Network. Despite a growing body of literature, the functional properties and connectivity patterns of this network remain largely unknown.
One of the goals of my research is to address these general questions about functional properties and connectivity patterns, with a specific focus on whether this system shows specificity for biological agents. To this end, we collaborated with a robotics lab and manipulated the humanlikeness of agents performing recognizable actions by varying visual appearance and movement kinematics. We then combined a range of measurement modalities, including cortical EEG oscillations, event-related brain potentials (ERPs), and functional magnetic resonance imaging (fMRI), with analytical techniques including pattern classification, representational similarity analysis (RSA), and dynamic causal modeling (DCM) to study the functional properties, temporal dynamics, and connectivity patterns of the Action Observation Network.
While our findings shed light on whether the human brain shows specificity for biological agents, the interdisciplinary work with robotics also allowed us to address questions about human factors in artificial agent design for social robotics and human-robot interaction, such as the uncanny valley: what kind of robots should we design so that humans can easily accept them as social partners?
Burcu Aysen Urgen received her PhD (2015) in Cognitive Science from the University of California, San Diego (UCSD) under the supervision of Professor Ayse P. Saygin; her BS degree in Computer Engineering and Information Science from Bilkent University; and her MS degree in Cognitive Science from Middle East Technical University. She is currently a postdoc at the University of Parma, Italy, with Professor Guy Orban. Her primary research interest is the neural mechanisms underlying visual perception of actions in the human brain. In her PhD, she investigated whether the human brain shows specificity for biological agents, using humanoid robots in collaboration with Professor Hiroshi Ishiguro's lab (Osaka University, Japan), neuroimaging methods including fMRI and EEG, and machine learning techniques. Her interdisciplinary work has been published in and presented at international, interdisciplinary journals and conferences. She was also one of three awardees of UCSD's Interdisciplinary Scholar Award for her work bridging cognitive neuroscience and human-robot interaction.