Seminars

UPCOMING SEMINARS 

Series: Department Seminar
Title: Using Inherent Structures to design Lean 2-layer RBMs
Speaker: Mr. Abhishek Bansal
Software Engineer
IBM India Research Lab, Delhi
Date and Time: Monday, June 25, 2018, 4:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Understanding the representational power of Restricted Boltzmann Machines (RBMs) with multiple layers is an ill-understood problem and remains an important area of active research. Motivated by the Inherent Structure formalism (Stillinger & Weber, 1982), extensively used in analysing spin glasses, we propose a novel measure called Inherent Structure Capacity (ISC), which characterizes the capacity of a fixed-architecture RBM by the expected number of modes of distributions emanating from the RBM with parameters drawn from a prior distribution. Though ISC is intractable, we provide a tractable upper bound. The bound suggests that for a single-layer RBM the ISC approaches a finite constant, and that to increase it one needs to add additional layers. Furthermore, we introduce Lean RBMs, which are multi-layer RBMs where each layer can have at most O(n) units, with the number of visible units being n. We show that for every single-layer RBM with Omega(n^(2+r)), r >= 0, hidden units there exists a two-layer lean RBM with Theta(n^2) parameters and the same ISC, establishing that 2-layer RBMs can achieve the same representational power as single-layer RBMs using far fewer parameters. To the best of our knowledge, this is the first result which quantitatively establishes the need for layering.
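The mode-counting idea behind ISC can be illustrated with a brute-force toy sketch (all names and the Gaussian prior below are illustrative assumptions, not the paper's construction): for a tiny binary RBM, enumerate the visible configurations, compute each one's unnormalised marginal, call a configuration a mode if it beats all Hamming-distance-1 neighbours, and average the mode count over random parameter draws.

```python
import itertools
import math
import random

def marginal(v, W, b, c):
    # Unnormalised RBM marginal over visible units:
    # p(v) ∝ exp(b·v) · Π_j (1 + exp(c_j + Σ_i v_i W_ij))
    score = math.exp(sum(bi * vi for bi, vi in zip(b, v)))
    for j in range(len(c)):
        score *= 1.0 + math.exp(c[j] + sum(v[i] * W[i][j] for i in range(len(v))))
    return score

def count_modes(W, b, c):
    # A visible configuration is a mode if its marginal strictly beats
    # every Hamming-distance-1 neighbour.
    n = len(b)
    p = {v: marginal(v, W, b, c) for v in itertools.product((0, 1), repeat=n)}
    modes = 0
    for v, pv in p.items():
        nbrs = [tuple(1 - v[k] if k == i else v[k] for k in range(n)) for i in range(n)]
        if all(pv > p[u] for u in nbrs):
            modes += 1
    return modes

def estimate_isc(n, m, trials=100, seed=0):
    # Monte-Carlo estimate of the expected mode count (an ISC-style
    # quantity) under a simple Gaussian prior on the parameters.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        W = [[rng.gauss(0, 1) for _ in range(m)] for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        c = [rng.gauss(0, 1) for _ in range(m)]
        total += count_modes(W, b, c)
    return total / trials
```

Brute-force enumeration limits this to a handful of visible units, which is exactly why the paper's tractable upper bound matters.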
Speaker Bio: The speaker is a Software Engineer at IBM India Research Lab, Delhi. He completed his B.Tech from IIT Delhi in Manufacturing Science & Engineering and his M.E. in CSE from IISc Bengaluru.
*This is a practice talk for ICML 2018.*
Host Faculty: Prof. Chiranjib Bhattacharyya
Series: CSA Distinguished Lecture
Title: Lecture 4: Generative Models in Deep Learning
Speaker: Professor Sargur N. Srihari
SUNY Distinguished Professor
Department of Computer Science and Engineering
University at Buffalo, The State University of New York, USA
Date and Time: Monday, June 25, 2018, 10:30 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Although early approaches to statistical pattern recognition emphasized generative models (the "Bayes classifier", the Naive Bayes classifier, Hidden Markov Models), they proved to be less useful than discriminative models (logistic regression, nearest-neighbor, conditional random fields, neural networks). Probabilistic graphical models made a comeback for generative models by overcoming some computational issues (fewer parameters), but others remained (inference is #P-complete). Deep learning has created a resurgence of interest in generative models. Starting with the Restricted Boltzmann machine, spectacular results have been obtained with Variational Autoencoders and Generative Adversarial Networks. The talk will outline this progression of models and indicate some of our results with deep generative models.
Speaker Bio: Sargur Srihari is a computer scientist whose work is on automated systems for pattern recognition and machine learning. The principal impact of his work has been on statistical methods, on the analysis and recognition of handwriting and in computational methods for forensic impression evidence.
Sargur Srihari is currently a SUNY Distinguished Professor in the Department of Computer Science and Engineering at the University at Buffalo, The State University of New York where he also holds adjunct professorships in the Department of Biostatistics and in the Department of Electrical Engineering. He teaches courses in machine learning and probabilistic graphical models.
With support from the United States Postal Service for over 20 years, he founded CEDAR, the Center of Excellence for Document Analysis and Recognition, in 1991, which had a major impact. His research led to: (i) the first large-scale handwritten address interpretation systems in the world (deployed by the IRS and USPS), (ii) post-Daubert court acceptance of handwriting testimony based on handwriting individuality assessment, (iii) a software system in use by forensic document examiners worldwide, (iv) statistical characterization of uncertainty in impression evidence, and (v) the first characterization of document image analysis as a subfield of pattern recognition.
Srihari's honors include: the Outstanding Achievements Award of IAPR/ICDAR in Beijing, China in 2011, Distinguished Alumnus of the Ohio State University College of Engineering in 1999, Fellow of the International Association for Pattern Recognition in 1996, Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 1995, and Fellow of the Institute of Electronics and Telecommunications Engineers (IETE, India) in 1992.
Host Faculty: Prof. M. Narasimha Murty
Series: CSA Faculty Colloquium
Title: Going beyond worst-case analysis in the design of approximation algorithms
Speaker: Dr. Anand Louis
Assistant Professor
Dept. of CSA
Date and Time: Friday, June 22, 2018, 4:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: In the study of algorithms for combinatorial optimization problems, the best known algorithms often provide a somewhat underwhelming performance guarantee, while simple heuristics perform remarkably well in practice. A possible explanation for this phenomenon could be that for many problems, the instances arising in practice tend to have some inherent structure that makes them "easier" than the "worst case" instances. Many attempts have been made to understand the structural properties of these instances, and to use them in designing algorithms specifically for such instances, which could perform much better than algorithms for general instances.
I will talk about a few vignettes in this direction.
Speaker Bio: Anand is an Assistant Professor at CSA. He received his Ph.D. in 'Algorithms, Combinatorics, and Optimization' from Georgia Tech in 2014, and held a postdoctoral position at Princeton University before moving to IISc in September 2016. His research interests lie in the theory of algorithms and optimization.
Host Faculty: Prof. Sunil L Chandran & Prof. Shalabh Bhatnagar
Series: M.Sc. (Engg) Colloquium
Title: Handling Overloads with Social Consistency
Speaker: Ms. Priyanka Singla
M.Sc. (Engg.) student
Dept. of CSA
Faculty Advisor: Prof. K Gopinath
Date and Time: Friday, June 22, 2018, 11:30 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Cloud computing applications have dynamic workloads, and they often observe spikes in the incoming traffic which might result in system overloads. System overloads are generally handled by various load-balancing techniques such as replication and data partitioning. These techniques are effective when the incoming bursty traffic is dominated by reads and writes to partitionable data, but they become futile against bursts of writes to a single hot object. Further, to provide good performance, the systems which use these load-balancing techniques often adopt a variant of eventual consistency and do not provide strong guarantees to applications and programmers. In this work, we propose a new client-based consistency model, called social consistency, as a solution to this single-object overload problem. Along with handling overloads, the proposed model also provides a stronger set of guarantees within subsets of (socially related) nodes, and provides eventual consistency across different subsets. We argue that by using this approach, we can in practice ensure reasonably good consistency among the clients and a concomitant increase in performance.
We further describe the design of a prototype system, BLAST, which implements this model. It dynamically adjusts resource utilization in response to changes in the workload, thus ensuring nearly constant latency and throughput that scales with the offered load. In particular, workload spikes for a single hot object are handled by cloning the object and partitioning the clients according to their social connectivity, binding the partitions to different clones, where each partition has a unique view of the object. The clones and the client partitions are recombined when the spike subsides. We compare the performance of BLAST to the Cassandra database system, and our experiments show that BLAST handles 1.6x (with one split) and 2.4x (with three splits) more workload. We also evaluate BLAST against another load-balancing system and show that BLAST provides 37% better Quality of Experience (QoE) to the clients.
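The split/recombine mechanism described above can be sketched with a toy counter (the class, its API, and the sum-merge reconciliation are my assumptions for illustration, not BLAST's actual design):

```python
class HotCounter:
    """Toy sketch of clone-on-overload: during a write spike the object is
    "split", clients are partitioned by social group, each partition writes
    to its own clone, and the clones are recombined when the spike ends."""

    def __init__(self):
        self.base = 0        # the single object's value outside a spike
        self.deltas = {}     # per-partition clone deltas during a split

    def split(self, groups):
        # Clone the object: one delta per social partition.
        self.deltas = {g: 0 for g in groups}

    def write(self, group, amount):
        if self.deltas:
            self.deltas[group] += amount   # partition-local write
        else:
            self.base += amount

    def read(self, group=None):
        if self.deltas and group in self.deltas:
            # During a split, each partition sees only its own clone's view.
            return self.base + self.deltas[group]
        return self.base + sum(self.deltas.values())

    def recombine(self):
        # Spike subsided: merge the clones back into one object (sum-merge
        # here; a real system's reconciliation would be type-specific).
        self.base += sum(self.deltas.values())
        self.deltas = {}
```

The sum-merge works for commutative counter-style updates; richer object types would need a different reconciliation policy.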
Series: Department Seminar
Title: Algorithms and Data Structures for Geometric Intersection Query Problems
Speaker: Dr. Rahul Saladi
Computer Science and Engineering
University of Illinois Urbana-Champaign
Date and Time: Thursday, June 21, 2018, 11:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Geometric intersection queries (GIQ), a.k.a. range-searching queries, have been very well studied by the computational geometry community and the database community. In a GIQ problem, the user is not interested in the entire input geometric dataset, but only in a small subset of it, and requests an informative summary of that small subset of the data. The goal is to organize the data into a data structure which occupies a small amount of space and yet responds to any user query in real time.
In this talk, I will give an overview of the contributions I have made along with my co-authors on some of the GIQ problems. The focus will be on (a) top-k queries, which are very popular in the database community, and (b) point location and rectangle stabbing queries, which are fundamental problems in the computational geometry community.
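As a toy instance of the preprocess-then-query pattern behind such structures, a 1-D stabbing query ("how many intervals contain a point q?") can be answered in O(log n) time from two sorted endpoint arrays (this example is illustrative and not taken from the talk):

```python
import bisect

class IntervalStabber:
    """Counts how many closed intervals [l, r] contain a query point q,
    after O(n log n) preprocessing and in O(log n) per query."""

    def __init__(self, intervals):
        # Sort left and right endpoints separately.
        self.starts = sorted(l for l, r in intervals)
        self.ends = sorted(r for l, r in intervals)

    def count_stabbed(self, q):
        # Intervals containing q = (#starts <= q) - (#ends < q).
        return bisect.bisect_right(self.starts, q) - bisect.bisect_left(self.ends, q)
```

Higher-dimensional rectangle stabbing and top-k variants need genuinely more sophisticated structures, which is where the talk's contributions lie.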
Speaker Bio: Rahul Saladi received his Ph.D. from the University of Minnesota. He obtained his bachelor's and master's degrees in computer science from IIIT Hyderabad. Rahul was awarded the Computer Science Fellowship in Fall 2011 and the Doctoral Dissertation Fellowship (DDF) for 2014-15. At the Fast Forward Preview Session at ACM SIGSPATIAL GIS 2012 he was awarded the first prize for his presentation. He has published his work in top theoretical computer science venues (SODA, SoCG) and in top database venues (PODS, IEEE TKDE).
Host Faculty: Dr. Anand Louis
Series: M.Sc. (Engg) Thesis Defense
Title: Checking observational purity of procedures
Speaker: Mr. Himanshu Arora
M.Sc. (Engg.) student
Dept. of CSA
Faculty Advisor: Prof. K V Raghavan
Date and Time: Tuesday, June 19, 2018, 2:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: We provide two static analysis approaches (using theorem proving) that check if a given (recursive) procedure behaves as if it were stateless, even when it maintains state in global variables. In other words, we check if the given procedure behaves like a mathematical function. In order to eliminate the need for manual annotations, we make use of an invariant that uses uninterpreted function symbols. This invariant captures the set of reachable global states in all runs of the procedure, if the procedure is observationally pure. If the procedure is not observationally pure, this invariant has no semantics. Allowing function symbols makes it easier to generate the invariant automatically. The two static analyses are an existential checker and an impurity witness checker. The impurity witness checker outputs a formula whose unsatisfiability implies that the procedure is observationally pure, whereas the existential checker outputs a formula that constrains the definition of the function that the given procedure may implement. Satisfiability of the formula generated by the existential checker implies that the given procedure is observationally pure. The impurity witness approach works better (empirically) with SMT solvers, whereas the existential approach is more precise on paper. We illustrate our work on examples such as matrix chain multiplication. Examples such as these are not addressable by related techniques in the literature. The closest work to ours is by Barnett et al.; that work cannot handle procedures with self-recursion. We prove both our approaches to be sound. We have implemented the two static analyses using the Boogie framework and the Z3 SMT solver, and have evaluated our implementation on a number of examples.
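A minimal example of the class of procedures targeted here: the memoized function below mutates a global table, yet is observationally pure, since every call with the same argument returns the same value (the example and the crude dynamic check are mine, not from the thesis; the thesis proves such properties statically):

```python
cache = {}   # global mutable state, updated across calls

def fact(n):
    """Recursive factorial with a global memo table: it maintains state,
    but observationally behaves like a mathematical function."""
    if n in cache:
        return cache[n]
    result = 1 if n <= 1 else n * fact(n - 1)
    cache[n] = result      # impure write, invisible in callers' results
    return result

def observationally_equal(f, args, reset):
    # Crude dynamic sanity check (no substitute for a static proof):
    # the outputs on a cold run and a warm run of the global state match.
    reset()
    cold = [f(a) for a in args]
    warm = [f(a) for a in args]
    return cold == warm
```

A dynamic check like this can only refute purity on the runs it sees; the point of the two static analyses is to certify it for all runs.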
Series: CSA Distinguished Lecture
Title: Lecture 3: Deep Learning Research - Representation Learning
Speaker: Professor Sargur N. Srihari
SUNY Distinguished Professor
Department of Computer Science and Engineering
University at Buffalo, The State University of New York, USA
Date and Time: Monday, June 18, 2018, 10:30 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: The key advancement in artificial intelligence in the last few years has been in learning representations. We will begin with the importance of representation in information processing in general and the concept of autoencoders. We then proceed to how the notion of representation is useful in deep architecture design, how learning algorithms share statistical strength across different tasks, and the reasons for the success of representation learning. In particular, we discuss the success of transfer learning and the word2vec method of natural language processing.
Speaker Bio: Sargur Srihari is a computer scientist whose work is on automated systems for pattern recognition and machine learning. The principal impact of his work has been on statistical methods, on the analysis and recognition of handwriting and in computational methods for forensic impression evidence.
Sargur Srihari is currently a SUNY Distinguished Professor in the Department of Computer Science and Engineering at the University at Buffalo, The State University of New York where he also holds adjunct professorships in the Department of Biostatistics and in the Department of Electrical Engineering. He teaches courses in machine learning and probabilistic graphical models.
With support from the United States Postal Service for over 20 years, he founded CEDAR, the Center of Excellence for Document Analysis and Recognition, in 1991, which had a major impact. His research led to: (i) the first large-scale handwritten address interpretation systems in the world (deployed by the IRS and USPS), (ii) post-Daubert court acceptance of handwriting testimony based on handwriting individuality assessment, (iii) a software system in use by forensic document examiners worldwide, (iv) statistical characterization of uncertainty in impression evidence, and (v) the first characterization of document image analysis as a subfield of pattern recognition.
Srihari's honors include: the Outstanding Achievements Award of IAPR/ICDAR in Beijing, China in 2011, Distinguished Alumnus of the Ohio State University College of Engineering in 1999, Fellow of the International Association for Pattern Recognition in 1996, Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 1995, and Fellow of the Institute of Electronics and Telecommunications Engineers (IETE, India) in 1992.
Host Faculty: Prof. M. Narasimha Murty


PAST SEMINARS 

Series: Department Seminar
Title: Polyhedral Auto-transformation with No Integer Linear Programming
Speaker: Aravind Acharya
Faculty Advisor: Uday Reddy
Date and Time: Wednesday, June 13, 2018, 4:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: State-of-the-art algorithms used in automatic polyhedral transformation for parallelization and locality optimization typically rely on Integer Linear Programming (ILP). This poses a scalability issue when scaling to tens or hundreds of statements, and may be disconcerting in production compiler settings. In this work, we consider relaxing integrality in the ILP formulation of the Pluto algorithm, a popular algorithm used to find good affine transformations. We show that the rational solutions obtained from the relaxed LP formulation can easily be scaled to valid integral ones to obtain the desired solutions, although with some caveats. We first present formal results connecting the solution of the relaxed LP to the original Pluto ILP. We then show that there are difficulties in realizing the above theoretical results in practice, and propose an alternate approach to overcome those while still leveraging linear programming. Our new approach obtains dramatic compile-time speedups for a range of large benchmarks. While achieving these compile-time improvements, we show that the performance of the transformed code is not sacrificed. Our approach to automatic transformation provides a mean compilation time improvement of 5.6x over the state-of-the-art on relevant challenging benchmarks from the NAS PB, SPEC CPU 2006, and PolyBench suites. We also came across situations where prior frameworks failed to find a transformation in a reasonable amount of time, while our new approach did so instantaneously.
This work will be presented at PLDI 2018 later this month.
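The core trick of scaling a rational LP solution to an integral one can be illustrated for a homogeneous constraint system A x >= 0, whose feasible set is closed under positive scaling (a toy sketch under that assumption; the paper handles the real Pluto formulation and its caveats):

```python
from fractions import Fraction
from math import lcm

def scale_to_integral(point):
    """Scale a rational feasible point of a homogeneous system A x >= 0 by
    the LCM of its denominators; the result is integral and still feasible,
    because the feasible cone is closed under positive scaling."""
    fracs = [Fraction(x) for x in point]
    m = lcm(*(f.denominator for f in fracs))
    return [int(f * m) for f in fracs]

def feasible(A, x):
    # Check A x >= 0 componentwise.
    return all(sum(a * xi for a, xi in zip(row, x)) >= 0 for row in A)
```

For non-homogeneous constraints (as arise in parts of the actual formulation), plain scaling can violate feasibility, which is one of the caveats the abstract alludes to.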
Speaker Bio: Aravind Acharya is a Ph.D. student working with Prof. Uday Reddy. His research is focused on automated compiler technologies for high-performance computing in multicore and manycore architectures.
Host Faculty: Deepak D'Souza
Series: Department Seminar
Title: Software Reliability Engineering: Algorithms and Tools
Speaker: Lance Fiondella
Date and Time: Tuesday, June 12, 2018, 2:30 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: While there are many software reliability models, there are relatively few tools to automatically apply these models. Moreover, these tools are over two decades old and are difficult or impossible to configure on modern operating systems, even with a virtual machine. To overcome this technology gap, we are developing an open-source software reliability tool for the software engineering community. A key challenge posed by such a project is the stability of the underlying model-fitting algorithms, which must ensure that the parameter estimates of a model are indeed those that best characterize the data. If such model fitting is not achieved, users who lack knowledge of the underlying mathematics may inadvertently use inaccurate predictions. This is potentially dangerous if the model underestimates important measures such as the number of faults remaining, or overestimates the mean time to failure (MTTF). To improve the robustness of the model-fitting process, we have developed expectation maximization (EM) and expectation conditional maximization (ECM) algorithms to compute the maximum likelihood estimates of non-homogeneous Poisson process (NHPP) software reliability models. This talk will present an implicit ECM algorithm, which eliminates computationally intensive integration from the update rules of the ECM algorithm, thereby achieving a speedup of between 200 and 400 times that of explicit ECM algorithms. The enhanced performance and stability of these algorithms will ultimately benefit the software engineering communities that use the open-source software reliability tool.
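For concreteness, maximum-likelihood fitting of one classic NHPP model, the Goel-Okumoto model with mean value function m(t) = a(1 - e^(-bt)), can be sketched as follows (a crude grid search standing in for the talk's EM/ECM updates; the function names and data are illustrative):

```python
import math

def go_mean(t, a, b):
    # Goel-Okumoto mean value function: expected faults detected by time t.
    return a * (1.0 - math.exp(-b * t))

def go_loglik(times, T, a, b):
    # NHPP log-likelihood for failure times observed on [0, T]:
    #   sum_i log(lambda(t_i)) - m(T),  with lambda(t) = a*b*exp(-b*t).
    return sum(math.log(a * b) - b * t for t in times) - go_mean(T, a, b)

def fit_go(times, T, b_grid):
    # For fixed b, the MLE of a is closed-form: a = n / (1 - e^{-bT}).
    # Grid-search b and keep the best log-likelihood.
    best = None
    for b in b_grid:
        a = len(times) / (1.0 - math.exp(-b * T))
        ll = go_loglik(times, T, a, b)
        if best is None or ll > best[0]:
            best = (ll, a, b)
    return best
```

A grid search is robust but slow and coarse; the appeal of the EM/ECM updates discussed in the talk is that they converge to the MLE reliably without such brute force.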
Speaker Bio: Lance Fiondella is an assistant professor in the Department of Electrical & Computer Engineering at the University of Massachusetts Dartmouth. He received his Ph.D. (2012) in Computer Science & Engineering from the University of Connecticut. Dr. Fiondella has published over 110 peer-reviewed journal articles and conference papers. Eight of his conference papers have been recognized with awards, including four with his students. His research has been funded by the United States Department of Homeland Security, Army Research Laboratory, Naval Air Systems Command, National Aeronautics and Space Administration, and the National Science Foundation, including a CAREER award. He has supervised four Master's theses and is the doctoral advisor of five Ph.D. students.
Host Faculty: Deepak D'Souza
Series: Department Seminar
Title: Question Answering: The Shallow and the Deep
Speaker: Prof. Soumen Chakrabarti
Date and Time: Monday, June 11, 2018, 4:00 PM
Venue: CSA Multimedia Class (Room No. 252, First Floor)
Abstract: Web search has come a long way from matching query words with document words. It is now mediated by knowledge graphs (KGs) such as Freebase, having hundreds of millions of entities belonging to tens of thousands of types, connected by billions of relations. Also essential is to annotate token spans in the Web corpus with canonical types (e.g. 'scientist') and entities (e.g. 'm.0jcx', Freebase's unique ID for Albert Einstein). Armed with suitable indexes and ranking functions, we can now search for "scientists who played the violin", but only if the search engine understands that 'scientists' is the target type, 'violin' is a grounded entity, and 'played' is the connecting relation. We will review recent dramatic improvements in the techniques search engines use to infer these diverse roles of query words, by jointly exploiting knowledge graphs and corpus annotations. The best techniques use neural networks to compare KG and corpus neighborhoods of candidate entities against possibly overlapping query segments. I will conclude with emerging work on more complex queries such as "countries having more rivers than India" or "how old was Indira Gandhi when Rahul Gandhi was born". Answering such queries involves breaking them down into subtasks and performing nontrivial reasoning on the subtask responses. Neural programmer-interpreters have been used with some initial success for such complex queries. Time permitting, we will review some of these methods.
Speaker Bio: Soumen Chakrabarti received his B.Tech in Computer Science from the Indian Institute of Technology, Kharagpur, in 1991 and his M.S. and Ph.D. in Computer Science from the University of California, Berkeley, in 1992 and 1996. He was a Research Staff Member at IBM Almaden Research Center from 1996 to 1999. He joined the Department of Computer Science and Engineering, IIT Bombay, in 1999 as an Assistant Professor and is now a full Professor in the same department. He has published in WWW, SIGIR, SIGKDD, SIGMOD, VLDB, ICDE, SODA, STOC, SPAA and other conferences, as well as Scientific American, IEEE Computer, VLDB and other journals. He won the best paper award at WWW 1999. His work on keyword search in databases received the 10-year influential paper award at ICDE 2012. Recently he was awarded the prestigious Shanti Swarup Bhatnagar award for his contributions to Computer Science.
Host Faculty: Prof. Chiranjib Bhattacharyya
Series: Ph.D. Colloquium
Title: Interplay of Incentives and Computation in Social Choice
Speaker: Rohit Vaish
Ph.D. student
Dept. of CSA
Faculty Advisor: Dr. Siddharth Barman
Date and Time: Monday, June 11, 2018, 3:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Social choice is the science of collective decision-making, spanning a wide variety of problems such as voting, matching, and fair division. Two key factors govern the success of any social procedure: (1) the availability of the right set of incentives to the participating agents, and (2) computational efficiency. Our work examines the interplay between incentives and computation in three fundamental problems in social choice: fair division, voting, and matching.
Our work in fair division provides a computationally efficient algorithm for dividing a set of discrete resources among a set of agents in a fair and economically efficient manner. Prior to our work, the only known way of achieving such an outcome was via solving a computationally intractable optimization problem. Thus, we use the ease of computation to make incentives possible.
In contrast to fair division, our work in voting uses the difficulty of computation to achieve the desired incentives. In particular, we study the problem of strategic voting in a general model of elections, and show that finding a manipulative vote can be computationally hard, thus ensuring truthful voter behavior.
Finally, our work in matching studies the classical problem of twosided stable matching. Here, we accept the fact that the agents will act strategically, and yet we are able to guarantee desirable social outcomes.
Series: CSA Distinguished Lecture
Title: Lecture 2: Deep Networks - Modern Practice
Speaker: Professor Sargur N. Srihari
SUNY Distinguished Professor
Department of Computer Science and Engineering
University at Buffalo, The State University of New York, USA
Date and Time: Monday, June 11, 2018, 10:30 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Deep networks are used today in many applications of artificial intelligence. This talk will provide an overview of different feedforward network architectures that have been used in practice, including convolutional neural networks and recurrent neural networks. We will also discuss the role of gradient-based learning, hidden units, computational graphs, and backpropagation. Commonly used methods of regularization and optimization will be described. We conclude with an overview of practical methods covering performance metrics, baseline models, the role of data, hyperparameters, and debugging strategies.
Speaker Bio: Sargur Srihari is a computer scientist whose work is on automated systems for pattern recognition and machine learning. The principal impact of his work has been on statistical methods, on the analysis and recognition of handwriting and in computational methods for forensic impression evidence.
Sargur Srihari is currently a SUNY Distinguished Professor in the Department of Computer Science and Engineering at the University at Buffalo, The State University of New York where he also holds adjunct professorships in the Department of Biostatistics and in the Department of Electrical Engineering. He teaches courses in machine learning and probabilistic graphical models.
With support from the United States Postal Service for over 20 years, he founded CEDAR, the Center of Excellence for Document Analysis and Recognition, in 1991, which had a major impact. His research led to: (i) the first large-scale handwritten address interpretation systems in the world (deployed by the IRS and USPS), (ii) post-Daubert court acceptance of handwriting testimony based on handwriting individuality assessment, (iii) a software system in use by forensic document examiners worldwide, (iv) statistical characterization of uncertainty in impression evidence, and (v) the first characterization of document image analysis as a subfield of pattern recognition.
Srihari's honors include: the Outstanding Achievements Award of IAPR/ICDAR in Beijing, China in 2011, Distinguished Alumnus of the Ohio State University College of Engineering in 1999, Fellow of the International Association for Pattern Recognition in 1996, Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 1995, and Fellow of the Institute of Electronics and Telecommunications Engineers (IETE, India) in 1992.
Host Faculty: Prof. M. Narasimha Murty
Series: Research Student Colloquium
Title: 1) Comparing Topological Structures for Visualization
2) Machine Learning for Software Engineering
Speakers: 1) Raghavendra G S, Ph.D. student, Dept. of CSA
2) Rahul Gupta, Ph.D. student, Dept. of CSA
Date and Time: Friday, June 08, 2018, 4:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: 1) Advances in data acquisition methods have led to an abundance of high-resolution data, and its time-varying nature has added to the complexity. Scientific data generated by both observations and simulations require methods to analyze, understand, and gain useful insights about the underlying scientific processes. In this regard, visualization is a natural choice, since it is an extension of an innate human trait: we often use visual cues to understand things. Topology, as a data abstraction tool, helps us decrease the complexity of both analysis and storage. In the past few decades, this has led to a synthesis of the seemingly obscure mathematical area of topology and visualization, resulting in more effective and efficient methods for understanding scientific data. In this talk, we will see how this has been achieved, along with some of our recent work involving comparative analysis of topological data structures. We also demonstrate examples to highlight the utility of our method in understanding scientific data.
2) The application of machine learning to software and its development is an emerging research area. Exploiting the "naturalness" of source code, machine learning has been successfully applied to a number of problems in software engineering research, e.g. code completion, bug detection, and program repair, to name a few. In the first half of this talk, I will give an overview of this area and briefly discuss some of these applications. In the second half of the talk, I will discuss our recent work which uses machine learning for fixing common C programming errors in student code.
Speaker Bio: 1) Raghavendra G S is a second-year Ph.D. student in CSA. He is advised by Prof. Vijay Natarajan and is a part of the Visualization and Graphics Lab (VGL). His research interests include scientific visualization, computational topology, and its applications.
2) Rahul Gupta is a Ph.D. student working in the Software Engineering and Analysis Lab, CSA. He is advised by Prof. Aditya Kanade and Prof. Shirish Shevade. His research interests lie broadly in developing machine learning techniques for solving problems in software engineering.
Host Faculty: Prof. Sunil L Chandran & Prof. Shalabh Bhatnagar
Series: CSA Distinguished Lecture
Title: Lecture 1: Deep Learning Overview
Speaker: Professor Sargur N. Srihari
SUNY Distinguished Professor
Department of Computer Science and Engineering
University at Buffalo, The State University of New York, USA
Date and Time: Monday, June 04, 2018, 10:30 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Artificial Intelligence (AI) methods of today are based on a methodology known as deep learning. The talk will describe how deep learning differs from previous AI methods, which were either based on modeling human expertise or used simple machine learning. Then we will describe commonly used deep learning architectures for various application domains. Finally, we will provide a glimpse of several deep learning research endeavors, such as methods for generalization beyond the training set, learning representations of complex data where variables are disentangled and decision making is simplified, and methods of inference that overcome computational intractability.
Speaker Bio: Sargur Srihari is a computer scientist whose work is on automated systems for pattern recognition and machine learning. The principal impact of his work has been on statistical methods, on the analysis and recognition of handwriting and in computational methods for forensic impression evidence.
Sargur Srihari is currently a SUNY Distinguished Professor in the Department of Computer Science and Engineering at the University at Buffalo, The State University of New York where he also holds adjunct professorships in the Department of Biostatistics and in the Department of Electrical Engineering. He teaches courses in machine learning and probabilistic graphical models.
With support from the United States Postal Service for over 20 years, he founded CEDAR, the Center of Excellence for Document Analysis and Recognition, in 1991, which had a major impact. His research led to: (i) the first large-scale handwritten address interpretation systems in the world (deployed by the IRS and USPS), (ii) post-Daubert court acceptance of handwriting testimony based on handwriting individuality assessment, (iii) a software system in use by forensic document examiners worldwide, (iv) statistical characterization of uncertainty in impression evidence, and (v) the first characterization of document image analysis as a subfield of pattern recognition.
Srihari's honors include: the Outstanding Achievements Award of IAPR/ICDAR in Beijing, China in 2011, Distinguished Alumnus of the Ohio State University College of Engineering in 1999, Fellow of the International Association for Pattern Recognition in 1996, Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 1995, and Fellow of the Institute of Electronics and Telecommunications Engineers (IETE, India) in 1992.
Host Faculty: Prof. M. Narasimha Murty
 Series: Ph.D. Colloquium Title: Some Cryptographic Schemes towards Securing Computation over Outsourced Data  Speaker: Mr. Sayantan Mukherjee
Ph.D. Student
Department of CSA  Faculty Advisor: Prof. Sanjit Chatterjee
 Date and Time: Thursday, May 31, 2018, 4:00 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract Recent improvements in architecture and networking are fueling ever-growing interest in outsourcing data as well as computation over that data. However, this comes at the cost of trusting the service provider with user data as well as the computation, a trust that is often violated without the user's knowledge. Conventional encryption, although it ensures confidentiality, bars one from performing any operation on the ciphertext. The question of secure computation over outsourced data comes in both theoretical and practical flavors. We have pursued both directions and come up with a few solutions for selected problems. In addition, we briefly looked into the problem of verifying the correctness of the computation an untrusted cloud service provider performs.
The first problem deals with a form of searching where a user is allowed to search for the presence of a keyword (w) in only a subset (S) of all files. The difficulty arises from the fact that the secret key has to be of constant size even though it encodes both the keyword (w) and the permitted set (S), which is a variable-size entity.
We give an efficient construction of key-aggregate searchable encryption (KASE) and prove that it is adaptively anonymous secure in the standard model in prime-order bilinear groups. As a byproduct, we also get an efficient variable identity broadcast encryption (VIBE) construction, which is dual to KASE in terms of functionality.
KASE has applications in file-sharing scenarios like Dropbox and Google Drive, while VIBE can be used to further granularize the privileged customers of a television channel.
In the second work, we deal with a form of access-control mechanism in which a message is encrypted with respect to a set (S*). The resulting ciphertext can be decrypted with a secret key associated with a set (S) if and only if S is a subset of S*. Such a scheme is called subset predicate encryption (SPE) and can be transformed to construct efficient wildcard IBE (WIBE), wicked IBE (WKD-IBE), DNF formula evaluation, etc.
We give two constructions of SPE with constant-size keys. The first also achieves constant-size ciphertexts, while the second achieves stronger security, namely adaptive security in the standard model. These constructions are instantiated in composite-order and prime-order bilinear groups respectively.
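As a toy illustration of the SPE functionality only (the encoding below is ours, and a real SPE scheme of course hides these sets cryptographically): a key for set S decrypts a ciphertext for set S* exactly when S is a subset of S*, and over a finite alphabet this subset test already captures wildcard identity matching, which is how SPE yields WIBE-style functionality.

```python
ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

def spe_decryptable(key_set, ct_set):
    # SPE functionality: a key for set S opens a ciphertext for S* iff S is a subset of S*.
    return key_set <= ct_set

def key_set_for_identity(identity):
    # A key for the concrete identity 'abc' encodes {(0,'a'), (1,'b'), (2,'c')}.
    return {(i, c) for i, c in enumerate(identity)}

def ct_set_for_pattern(pattern):
    # A ciphertext for the wildcard pattern 'a*c' allows any character at '*'.
    s = set()
    for i, c in enumerate(pattern):
        if c == '*':
            s |= {(i, a) for a in ALPHABET}
        else:
            s.add((i, c))
    return s

# 'abc' matches 'a*c'; 'abd' does not.
assert spe_decryptable(key_set_for_identity('abc'), ct_set_for_pattern('a*c'))
assert not spe_decryptable(key_set_for_identity('abd'), ct_set_for_pattern('a*c'))
```

The subset check is the whole predicate; the cryptographic content of the constructions lies in enforcing it without revealing the sets.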
Chosen-ciphertext security (CCA) is the strongest notion of security and is often mandatory in practical scenarios. As the adversary in the CCA model is active, devising a CCA-secure encryption scheme is often hard.
There are a few techniques that generically convert CPA-secure predicate encryptions (PE) to CCA security, but at a nontrivial variable cost in terms of ciphertext size and additional pairing evaluations during decryption.
We devise two generic constructions of CCA-secure PEs from CPA-secure pair-encoding-based PEs with a small constant additional cost. The first construction requires only three additional pairing evaluations, whereas the second reduces this number to one by means of a one-time signature (OTS).
On the more applied side, a problem of recent interest is performing wildcard search in the encrypted domain (WcSSE) in the symmetric-key setting. Precisely, in WcSSE, a search should return the file identifiers containing a keyword (w) that matches the wildcard query word (w*), where matching denotes wildcard pattern matching. The nontriviality of this problem follows from the unrestricted nature of the wildcards allowed in a query word.
We came up with HAWcS, a provably secure construction of WcSSE in the standard model that performs orders of magnitude better than the state of the art, as both theoretical analysis and a prototype implementation suggest.
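For concreteness, the plaintext predicate a WcSSE scheme must evaluate (here in the clear; the scheme evaluates it over encrypted data) can be sketched as a small dynamic program over unrestricted wildcards, where '?' matches one character and '*' matches any sequence:

```python
def wildcard_match(word, query):
    """Plaintext wildcard matching: '?' matches one character, '*' any sequence."""
    # dp[j] is True iff the query prefix consumed so far can match word[:j].
    dp = [True] + [False] * len(word)
    for qc in query:
        if qc == '*':
            # '*' absorbs any suffix: once a prefix matches, all longer prefixes do.
            for j in range(1, len(word) + 1):
                dp[j] = dp[j] or dp[j - 1]
        else:
            new = [False] * (len(word) + 1)
            for j in range(1, len(word) + 1):
                new[j] = dp[j - 1] and (qc == '?' or qc == word[j - 1])
            dp = new
    return dp[len(word)]

assert wildcard_match('encryption', 'enc*ion')
assert wildcard_match('cat', 'c?t')
assert not wildcard_match('cat', 'c?d')
```

The unrestricted placement and number of wildcards is exactly what makes evaluating this predicate obliviously, over encrypted keywords, nontrivial.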
Hidden vector encryption (HVE) is an interesting primitive that allows one to perform range and conjunctive queries. We classified the attributes of a relational database by their type and improved upon an existing work in terms of both functionality and efficiency.
The final work improves upon an existing technique of authenticated email search.
Precisely, the cloud service provider here has to return an easily verifiable proof of the correctness of the search the user has requested. Our prototype implementation gives evidence of its practicality.
 Series: Ph.D. Thesis Defense Title: Developing Software Tools using Dynamic Program Analysis  Speaker: Ms. Monika Dhok
Ph.D. student
Dept. of CSA  Faculty Advisor: Prof. K Gopinath
 Date and Time: Tuesday, May 29, 2018, 3:00 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract The software development process consists of many stages, such as design, implementation, and testing. Programmers' intuition and expertise play a vital role in the success of these stages. Therefore, various tools have been developed that assist programmers by automating their tasks. Program analysis, the process of analyzing program behavior by examining its source code or execution, forms the basis of such software tools. We identify and address the following three research gaps in the area of program analysis.
Type-awareness in concolic testing
Software testing is a widely used dynamic program analysis technique. Various automated test generation techniques have been proposed, as manually writing an effective test suite is challenging. Concolic testing is one such technique: it outputs a test suite for a given program with the intent of maximizing code coverage. This technique is very effective for statically typed languages. We show that applying concolic testing to dynamically typed JavaScript programs causes the generation of a large number of test cases that explore paths and types of variables repeatedly. We propose an effective approach that scales concolic testing by incorporating type-awareness. Our experimental results show that type-aware concolic testing is as effective as conventional concolic testing in detecting bugs and achieving coverage, even with a small percentage of the test cases.
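A purely illustrative toy (our own sketch, not the paper's algorithm) of why type-obliviousness blows up the test suite: in a dynamically typed program the engine must pick a type as well as a value for each symbolic input, so enumerating every typing for every path multiplies explorations even when only a couple of program paths exist.

```python
from itertools import product

# One witness value per candidate type, per variable (hypothetical choices).
X_VALUES = {'number': 0, 'string': 'a', 'none': None}
Y_VALUES = {'number': 1, 'string': 'b', 'none': None}

def program(x, y):
    # The program under test: its path depends only on a numeric comparison.
    if isinstance(x, (int, float)) and isinstance(y, (int, float)) and x < y:
        return 'then'
    return 'else'

# A type-oblivious engine explores every (type, type) combination.
trials = [(tx, ty, program(X_VALUES[tx], Y_VALUES[ty]))
          for tx, ty in product(X_VALUES, Y_VALUES)]
paths = {branch for _, _, branch in trials}
print(len(trials), len(paths))   # 9 typed explorations covering only 2 program paths
```

A type-aware engine would treat the type choice as part of the path condition and avoid re-exploring a path already covered under an equivalent typing.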
Test synthesis to detect redundant traversal bugs
Test generation techniques like concolic testing focus on verifying the functional correctness of a program. Performance testing, on the other hand, verifies software efficiency. Our second work focuses on automated performance testing in the context of redundant traversal bugs. Such bugs occur when a program fragment performs repetitive computations across loop iterations. Various static and dynamic analysis techniques have been proposed to detect such bugs automatically. However, while the effectiveness of dynamic analyses depends on the input test cases, static analyses are less useful in validating the presence of the problem. We propose an approach to automatically generate tests that expose redundant traversals of loops. Our experiments reveal the effectiveness of our approach, generating 224 tests that reveal 46 bugs across seven Java libraries, including 34 previously unknown bugs.
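A minimal analogue of the bug class (the work targets Java libraries; this Python example is ours): a list membership test inside a loop re-traverses the list on every iteration, an O(n^2) redundant traversal that only a suitably large or skewed input will expose.

```python
def dedup_slow(items):
    seen = []
    for x in items:          # 'x not in seen' scans the whole list again each time
        if x not in seen:
            seen.append(x)
    return seen

def dedup_fast(items):
    seen, out = set(), []
    for x in items:          # set membership is O(1) expected, no re-traversal
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

assert dedup_slow([3, 1, 3, 2, 1]) == dedup_fast([3, 1, 3, 2, 1]) == [3, 1, 2]
```

On small inputs the two behave identically, which is exactly why such bugs survive functional test suites and need performance-directed test generation.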
Performance hints for stable multithreaded systems
Abstract Assuring the correctness and efficiency of sequential programs is easier than for multithreaded programs. This is mainly because the execution of multithreaded programs depends not only on the input but also on thread interleavings. Stable multithreaded systems restrict the set of possible schedules such that the schedule remains unaffected for a given input. This eases bug reproduction and thereby helps guarantee correctness. However, the determinism imposed in these systems can serialize the schedule, resulting in performance degradation. Our third work focuses on automatically deriving performance hints so that such serialization can be avoided. Our experimental results demonstrate that we are able to reduce the overall execution time of programs by up to 34% compared to the execution time when performance hints are inserted manually.
 Series: Department Seminar Title: Distributed computing in compact message passing.  Speaker: Prof. Amitabh Trehan
Computer Science, Loughborough University, UK  Date and Time: Monday, May 28, 2018, 4:30 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract In the world of distributed algorithms, algorithms like compact routing are designed with the idea of using low memory (sublinear in n, the number of nodes) per node, while the most 'efficient' algorithms essentially take linear memory per node. Imagine extending this paradigm to a scenario where the nodes do not even have linear memory but would still like to support O(n) neighbours; this scenario can be envisaged for futuristic low-memory networks such as the Internet of Things and low-memory sensor and wireless networks.
Towards this, we introduce the Compact Message Passing model, where we have an arbitrary connected network (with arbitrary node degrees) of nodes with o(n) working memory. In each synchronous round, nodes interleave reading of their input ports with processing and writing output, thus executing a streaming-style computation. We solve some basic problems in this model and give a 'compact function' for computing the neighbourhood of labelled binary trees (used for the compact self-healing algorithm CompactFT).
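A toy sketch of the streaming idea in one round (our own simplification of the model): the node never buffers all incoming messages; it reads its ports one at a time with working memory that is constant regardless of its degree, here computing the maximum and count of neighbour values.

```python
def cmp_round(in_ports):
    # Constant working memory independent of the degree: a running max and a count.
    best, deg = None, 0
    for port in in_ports:        # read input ports one at a time (streaming)
        msg = port()             # each port delivers one message this round
        deg += 1
        if best is None or msg > best:
            best = msg
    return best, deg

# Four neighbours deliver their values on four ports.
ports = [lambda v=v: v for v in [7, 3, 9, 1]]
print(cmp_round(ports))   # (9, 4)
```

The interesting problems in the model are of course those, like the tree-neighbourhood computation above, where the natural solution would store per-neighbour state and must instead be made compact.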
Reference: https://arxiv.org/abs/1803.03042
Host Faculty: Dr. Siddharth Barman
 Series: Department Seminar Title: Generalized Points-to Graphs: A New Abstraction for Memory in the Presence of Pointers  Speaker: Prof. Uday Khedker, IITB, Mumbai
 Date and Time: Wednesday, May 23, 2018, 12:00 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract 
Scaling flow- and context-sensitive exhaustive points-to analysis is a challenge that has not yet been met satisfactorily. For top-down approaches, the problem centers on repeated analysis of the same procedure; for bottom-up approaches, the abstractions used to represent procedure summaries have not scaled while preserving precision. Bottom-up approaches to points-to analysis require modelling unknown pointees accessed indirectly through pointers that may be defined in the callers.
We propose a novel abstraction called the Generalized Points-to Graph (GPG), which views points-to relations as memory updates and generalizes them using counts of indirection levels, leaving the unknown pointees implicit. This allows us to construct GPGs as compact representations of bottom-up procedure summaries in terms of memory updates and control flow between them. Their compactness is ensured by the following optimizations: strength reduction reduces the indirection levels, redundancy elimination removes redundant memory updates and minimizes control flow (without over-approximating data dependence between memory updates), and call inlining enhances the opportunities for these optimizations.
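As a rough concrete reading of GPG edges (a simplification of the paper's abstraction, with our own toy encoding): an edge x --(i,j)--> y updates the location reached by i dereferences of x with the location reached by j dereferences of y, so that p = &a, q = p, and r = *q become edges with indirection levels (1,0), (1,1), and (1,2) respectively.

```python
def deref(env, loc, k):
    # Follow k levels of indirection starting from variable `loc`.
    for _ in range(k):
        loc = env[loc]
    return loc

def apply_update(env, edge):
    # Edge x --(i,j)--> y: assign to the location reached by i derefs of x
    # the location reached by j derefs of y.
    x, (i, j), y = edge
    env[deref(env, x, i - 1)] = deref(env, y, j)

env = {'a': 'b'}                          # a already points to b
apply_update(env, ('p', (1, 0), 'a'))     # p = &a
apply_update(env, ('q', (1, 1), 'p'))     # q = p
apply_update(env, ('r', (1, 2), 'q'))     # r = *q
print(env)   # {'a': 'b', 'p': 'a', 'q': 'a', 'r': 'b'}
```

Strength reduction in this reading means simplifying indirection counts once pointees are known, e.g. after p = &a, the edge for q = p with levels (1,1) reduces to q --(1,0)--> a; in a summary the pointees stay implicit until the calling context supplies them.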
The effectiveness of GPGs lies in the fact that they discard as much control flow as possible without losing precision (i.e., by preserving data dependence without over-approximation). This is why GPGs are very small even for main procedures that contain the effect of entire programs. This allows our implementation to scale to 158 kLoC for C programs.
At a more general level, GPGs provide a convenient abstraction of
memory and memory transformers in the presence of pointers. Future
investigations can try to combine it with other abstractions for static
analyses that can benefit from pointsto information.
Host Faculty: R. Govindarajan
 Series: CSA Faculty Colloquium Title: Vignettes in Model Discovery  Speaker: Prof. Chiranjib Bhattacharyya
Dept. of CSA  Date and Time: Friday, May 18, 2018, 4:00 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract As data-driven models get increasingly adopted in different disciplines, the need to provide guarantees on the quality of learnt models is becoming increasingly important. Though these problems are intractable, through a judicious mix of domain-driven understanding and algorithms we show that in several cases it is possible to characterise the trade-off between the quality of models and the amount of data. Specifically, we report recent results in model discovery arising in different contexts, namely topic models, non-negative matrix factorization, Lasso, and deep networks.
Speaker Bio: Chiranjib Bhattacharyya is currently a Professor in the Department of Computer Science and Automation, Indian Institute of Science. His research interests are in machine learning, optimisation, and their applications to industrial problems. Recently he was inducted as a Fellow of the Indian Academy of Engineering.
He joined the Department of CSA, IISc, in 2002 as an Assistant Professor. Prior to joining the Department, he was a postdoctoral fellow at UC Berkeley. He holds BE and ME degrees, both in Electrical Engineering, from Jadavpur University and the Indian Institute of Science, respectively, and completed his PhD in the Department of Computer Science and Automation, Indian Institute of Science. For more information about his work please see http://mllab.csa.iisc.ernet.in.
Host Faculty: Prof. Sunil L Chandran & Prof. Shalabh Bhatnagar
 Series: Ph.D. Thesis Defense Title: Approximation Algorithms for Geometric Packing and Covering Problems  Speaker: Mr. Aniket Basu Roy
Ph.D. student
Dept. of CSA  Faculty Advisor: Prof. Sathish Govindarajan
 Date and Time: Friday, May 11, 2018, 10:30 AM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract We study a host of geometric optimization problems that are NP-hard and design polynomial time approximation algorithms for them. More precisely, we are given a family of geometric objects and a point set, and we study different variants and generalizations of Packing and Covering problems. Our objects of study are mostly families of non-piercing regions in the plane. We call a set of simple and connected regions non-piercing if, for any pair of intersecting regions A and B, both A \ B and B \ A are connected. Disks, squares, and halfplanes are examples of non-piercing regions, whereas lines and rectangles are piercing. For most of the problems we have studied, a simple local search algorithm is enough to yield a PTAS, whose approximation analysis requires a suitable bipartite graph on the local search solution and the optimal solution to have a balanced sublinear separator.
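The local-search template can be sketched in one dimension, where disjoint intervals are the simplest packing instance (a toy illustration only; in 1-D a greedy sweep is already optimal, and the thesis's analysis concerns planar non-piercing regions and larger swaps): repeatedly add intervals, and swap one chosen interval out for two disjoint replacements, until no such move improves the solution.

```python
from itertools import combinations

def disjoint(ivs):
    ivs = sorted(ivs)
    return all(r1 <= l2 for (l1, r1), (l2, r2) in zip(ivs, ivs[1:]))

def local_search(intervals):
    sol = set()
    changed = True
    while changed:
        changed = False
        for iv in intervals:                      # additions (0-for-1 moves)
            if iv not in sol and disjoint(sol | {iv}):
                sol.add(iv)
                changed = True
        for out in list(sol):                     # 1-for-2 swaps
            cands = [iv for iv in intervals if iv not in sol]
            for a, b in combinations(cands, 2):
                if disjoint((sol - {out}) | {a, b}):
                    sol = (sol - {out}) | {a, b}
                    changed = True
                    break
            if changed:
                break
    return sol

# One long interval blocks two short ones; a 1-for-2 swap recovers them.
print(len(local_search([(0, 10), (0, 4), (5, 9)])))   # 2
```

The PTAS analysis bounds how far such a locally optimal solution can be from the optimum, using the balanced sublinear separator of the bipartite exchange graph mentioned above.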
We study a generalization of the standard packing problem, called the Capacitated Region Packing problem, and its slight variant, the Shallow Packing problem. We devise a PTAS for both these problems with restrictions on the capacities. For the former problem the objects are non-piercing, whereas for the latter the objects can be even more general and need only have subquadratic union complexity, with the capacities at most some constant in both cases. The non-triviality here is to show that the intersection graph of arrangements with shallow depth, which is not planar, has balanced sublinear separators. To this end, we consider a graph on a point set and use the separator of this graph to obtain a separator of the intersection graph of the arrangement. We also study the Shallow Point Packing problem; we show that local search works here as well for unit capacity, and we devise a constant factor approximation algorithm using an adaptation of the Bronnimann-Goodrich technique for packing problems.
The Runaway Rectangle Escape problem is closely related to the above packing problems and is motivated by routing in printed circuit boards. Here we are given a set of axis-parallel rectangles inside a rectangular boundary R and a maximum allowed depth d. The objective is to extend the maximum number of input rectangles to one of the four sides of R such that the maximum depth of a point is at most d after extension. We show that local search gives a (2 + epsilon)-approximation for d = O(1). When the input rectangles are all disjoint, we devise a simple 4(1 + 1/(d - 1))-approximation algorithm. We also propose a randomized (1 + epsilon)-approximation algorithm based on randomized rounding, under some density assumptions. Lastly, we show the problem to be NP-hard even when the rectangles are unit squares aligned in a grid.
We study the Multi-Cover problem, which is a generalization of the Set Cover problem. We give a PTAS for non-piercing regions when the depth of every point is at most a constant. We also study different variants of the covering problem, like the Unique Coverage and Prize Collecting Set Cover problems. For Unique Coverage, we show that local search yields a PTAS for non-piercing regions of bounded depth and degree. For Prize Collecting Set Cover, a PTAS works for non-piercing regions if the weight of every region is within a range [1, a], where a is some constant.
Lastly, we consider variants of the Art Gallery problem called the Minimum (Horizontal) Sliding Cameras problems, M(H)SC. We are given an orthogonal polygon and need to deploy mobile guards, each of which walks along an orthogonal (horizontal) line segment and guards a point inside the polygon if the perpendicular from the point onto the line segment lies inside the polygon. Our local search algorithm yields a PTAS for the MHSC problem, and also for the MSC problem when the polygon has no holes. To do so, we prove that an appropriate graph on orthogonal line segments is planar, by proposing a graph drawing scheme.

