Monthly Archives: February 2018

NIMH Virtual Workshop on Quantum Computing

NIMH VIRTUAL WORKSHOP:

SOLVING COMPUTATIONAL CHALLENGES IN GENOMICS AND NEUROSCIENCE VIA PARALLEL & QUANTUM COMPUTING

March 28, 2018

9:00 am – 1:00 pm EDT

Goal of the workshop

This virtual workshop aims to highlight core computational problems in genetics and the subdomains of neuroscience that parallel or quantum computing could address. By bringing together experts in quantum and parallel computing with experts in genetics and neuroscience, we hope to start a dialogue between academic and industry partners working in this area, with a focus on algorithm optimization and development. The workshop will serve as a forum and nexus for finding convergence among cross-disciplinary fields that currently operate largely independently: 1) genomics and neuroscience, 2) AI/machine learning, and 3) quantum computing. The goal is to identify key avenues for computational optimization via parallel and quantum algorithms and to facilitate the use of state-of-the-art computational technologies for addressing core bottlenecks in genomics and neuroscience.

Overview

This workshop will cover the following topics, with a 5-minute break following each topic discussion:

  • Opening Remarks (10 min)
  • Topic 1: Computational Challenges in Genetics and Neuroscience (1.5 hours)
  • Topic 2: AI, machine learning and parallel computing (45 min)
  • Topic 3: Quantum Algorithms for Accelerated Computation: Opportunities and Challenges (1 hour)
  • Roundtable Discussion & Summary (30 min)

*NOTE: Some speakers have yet to be confirmed and/or are subject to change.

9:00 – 9:10 am: Opening Remarks – Thomas Lehner, Geetha Senthil, Susan Wright, National Institute of Mental Health, Office of Genomics Research Coordination

Morning Session

Chairs: Alan Anticevic, Ph.D., Yale University, and Alán Aspuru-Guzik, Ph.D., Harvard University

Topic 1: Computational Challenges in Genetics and Neuroscience

This session highlights where computational challenges and bottlenecks exist, both in scaling (data and computational features) and in computational speedup.

9:10 – 9:25 am: Presentation 1: Genetics and functional genomics

Michael McConnell, Ph.D., University of Virginia, Michael Gandal, M.D., Ph.D., University of California, Los Angeles

9:25 – 9:40 am: Presentation 2: Neurophysiology (data processing, extraction, and analysis)

Potential speaker: Mike Halassa, M.D., Ph.D., Massachusetts Institute of Technology

9:40 – 9:55 am: Presentation 3: Neuroimaging

Potential speakers: Alan Anticevic, Ph.D., Yale University, Stephen Smith, University of Oxford

9:55 – 10:10 am: Presentation 4: Quantitative deep phenotypic analysis

Potential speakers: Andrey Rzhetsky, Ph.D., University of Chicago, Justin Baker, M.D., Ph.D., Massachusetts General Hospital, Jukka-Pekka Onnela, M.Sc., Ph.D., Harvard University

10:10 – 10:25 am: Presentation 5: Computational modeling

Suggested topic: Spiking neural models and ion-channel modeling – spiking network simulation

Speakers: John Murray, Ph.D., Yale University, Michael Hines, Ph.D., Yale University

10:25 – 10:30 am: Break

Topic 2: AI, machine learning and parallel computing

This session will discuss applications of state-of-the-art classical parallel computing algorithms for machine learning, simulation, and optimization of analyses with ‘big’ data.

10:30 – 10:45 am: Presentation 1: Overview of machine learning via classical and parallel computing technologies

Potential speaker: Guillermo Sapiro, M.Sc., Ph.D., Duke University

10:45 – 11:00 am: Presentation 2: Deep Learning for AI applications – e.g. DeepMind

Potential speaker: Tim Lillicrap, Ph.D., DeepMind

11:00 – 11:15 am: Presentation 3: Parallel processing & GPUs

Suggested topic: Nvidia parallel processing & GPU capabilities for efficient high-performance applications

Potential speaker: To be confirmed (Alan will reach out to his contact at Nvidia)

11:15 – 11:20 am: Break

Afternoon Session

Chairs: Aram Harrow, Ph.D., Massachusetts Institute of Technology, and John Murray, Ph.D., Yale University

Topic 3: Quantum Algorithms for Accelerated Computation: Opportunities and Challenges

This session will discuss the current state of quantum hardware and algorithms. What kind of advantages (either in terms of speed or solution quality) can be obtained by using quantum machine learning? How close are existing or proposed near-term hardware platforms to being able to implement these algorithms?
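
For a concrete, primer-level sense of the kind of speedup at stake, here is an illustrative sketch in Python/NumPy (not drawn from any speaker's material): Grover search over a simulated 3-qubit register finds one marked item among 8 with roughly 95% probability after 2 oracle calls, whereas classical guessing needs about 4 to 5 queries on average.

```python
# Illustrative sketch: Grover search on a simulated 3-qubit state vector.
import numpy as np

n_qubits, marked = 3, 5                       # 2**3 = 8 items; item 5 is "marked"
dim = 2 ** n_qubits

state = np.full(dim, 1 / np.sqrt(dim))        # uniform superposition (Hadamard layer)
oracle = np.eye(dim)
oracle[marked, marked] = -1                   # phase-flip the marked basis state
s = np.full(dim, 1 / np.sqrt(dim))
diffusion = 2 * np.outer(s, s) - np.eye(dim)  # inversion about the mean

for _ in range(2):                            # ~ floor(pi/4 * sqrt(8)) iterations
    state = diffusion @ (oracle @ state)

print(f"P(measuring marked item) = {abs(state[marked]) ** 2:.3f}")   # ~0.945
```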

11:20 – 11:35 am: Presentation 1: Overview and primer: what is quantum computing good for?

Potential speaker: Alán Aspuru-Guzik, Ph.D., Harvard University

11:35 – 11:50 am: Presentation 2: Status and Prospects for Quantum Hardware

Potential speaker: Nicole Barberis, IBM

11:50 am – 12:05 pm: Presentation 3: Promising Quantum Computing Algorithms on the Horizon

Potential speaker: Ashley Montanaro, Ph.D., University of Bristol

12:05 – 12:20 pm: Presentation 4: Quantum Machine Learning and Optimization

Seth Lloyd, Massachusetts Institute of Technology

12:20 – 12:30 pm: Break

12:30 – 12:50 pm: Roundtable Discussion & Summary

Moderators: Stefan Bekiranov, University of Virginia & John Murray, Yale University

  • What are the immediate avenues for computation optimization via parallel computing?
  • Which problems are suitable for parallel vs. quantum computing?
  • What are the distinct challenges facing parallel vs quantum computing platforms?
  • Which are the most impactful avenues for quantum algorithm development from the standpoint of neuroscience and genomics?
  • What are the opportunities for public-private partnership?

12:50 – 1:00 pm: Summary/Closing Remarks

Potential speakers: Alán Aspuru-Guzik, Harvard University, Alan Anticevic, Yale University

1:00 pm: Adjourn

NIMH Quantum Computing Virtual Workshop Agenda_02-15-2018.docx

farnam disk usage

total 5.07747E+11 of 600 TB (a summing sketch follows the listing below)
gg487 78814601600
sl857 43706817280
fn64 37239092096
tg397 36298128896
jx98 35683183104
mg888 31885392000
jz435 27797183616
sk972 21908311936
pse5 20335756160
sl2373 15417125248
dl598 15304710144
cs784 13924454016
mr724 11768326784
ll426 8905029760
sl847 8821790592
wum2 8484955008
pmm49 8177639424
jad248 7989755008
yy222 6347266176
rrk24 6182451584
yf9 5816445952
hm444 5719293952
mihali 5459016704
lc848 4090249984
meg98 3984611584
ah633 3367398912
yy532 2965087360
bp272 2906803456
xk4 2412784512
jjl86 1971824768
rdb9 1763952640
msp48 1748680320
as2665 1596345472
ky26 1583088768
ml724 1557992448
jl56 1480538368
ha275 1467031936
jw2394 1423484800
sb238 1275168128
gf3 1189340928
jrb97 1012897664
cy288 879981696
slw67 788305152
pdm32 752482048
lh372 671649152
jsr59 592016256
as898 506352512
dc547 424733696
mpw6 385383040
hz244 374372096
km735 337744640
nb23 324053504
ls926 314810880
xc279 306357504
keckadmins 265108480
aa544 249558400
xl348 237337088
simen 163574272
xz374 162198144
lr579 159751424
yf95 150772480
nmb38 115795456
jjl83 109213440
mas343 96425216
yk336 95688832
williams 95688832
zl222 68034176
wb244 63682432
rka24 59127808
yy448 46536704
aa65 44632832
zc264 43432320
gene760 33406080
mx55 27679616
zhao 25241600
amg89 21919360
co254 21889920
an377 19965312
xm24 19335680
jc2296 17970560
jw72 17455616
njc2 16694016
root 9156608
jk935 6167936
cc59 4636672
law72 3761792
shuch 3039616
yz464 1122176
gene760_2016 475520
bab99 387584
tl444 326144
dr395 185472
jhq4 115584
mj332 60160
rm658 4096
jjp76 3968
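
A small parsing sketch for listings like the one above. The units of the per-user column are not stated; they are treated here as KiB, which is an assumption (it makes the listed values roughly consistent with the reported 5.07747E+11 total against the 600 TB quota). Only a few lines of the snapshot are inlined; in practice the full listing would be read from a file.

```python
# Sketch: sum a "user  size" listing and report usage against the quota.
# Assumption: sizes are in KiB (du-style 1024-byte blocks); adjust if not.
listing = """\
gg487 78814601600
sl857 43706817280
fn64 37239092096
"""

QUOTA_TB = 600

total_kib = sum(int(line.split()[1]) for line in listing.splitlines())
total_tb = total_kib * 1024 / 1e12            # KiB -> bytes -> decimal TB
print(f"{total_tb:.1f} TB used of {QUOTA_TB} TB "
      f"({100 * total_tb / QUOTA_TB:.1f}% of quota)")
```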

Secondary_appt Department-cs CS Colloquium/Danqi Chen, Stanford Univ./Feb. 26, 4pm/AKW 200

CS Colloquium

Monday, February 26

4:00 p.m., AKW 200 (coffee & cookies at 3:45)

Speaker: Danqi Chen, Stanford University

Title: Knowledge from Deep Understanding of Language

Host: Dragomir Radev

Abstract:

Almost all of humanity’s knowledge is now available online, but the vast majority of it is principally encoded in the form of human language explanations. In this talk, I explore novel neural network or deep learning approaches that open up increased opportunities for getting a deep understanding of natural language text. First, I show how distributed representations enabled the building of a smaller, faster, better dependency parser for finding the structure of human language sentences. Then I show how related neural technologies can be used to improve the construction of knowledge bases from text. However, maybe we don’t need this intermediate step and can directly gain knowledge and answer people’s questions from large textbases? In the third part, I explore doing this by looking at a simple but highly effective neural architecture for question answering.
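
As a rough, hypothetical sketch of the parsing idea mentioned above (a transition-based dependency parser scoring its next action with a small feed-forward network over word embeddings), with a toy vocabulary, random untrained weights, and tanh standing in for the cube activation; this is an illustration, not the speaker's implementation:

```python
# Toy illustration: score parser transitions with a one-hidden-layer network
# over dense word-embedding features.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<ROOT>", "she", "reads", "books", "<NULL>"]
EMB_DIM, HIDDEN, N_FEATS = 8, 16, 3           # tiny sizes for illustration
TRANSITIONS = ["SHIFT", "LEFT-ARC", "RIGHT-ARC"]

# Randomly initialised parameters stand in for trained ones.
embeddings = rng.normal(size=(len(VOCAB), EMB_DIM))
W1 = rng.normal(size=(HIDDEN, N_FEATS * EMB_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(len(TRANSITIONS), HIDDEN))

def score_transitions(feature_words):
    """Concatenate embeddings of a few stack/buffer words and score each transition."""
    idx = [VOCAB.index(w) for w in feature_words]
    x = embeddings[idx].reshape(-1)           # dense feature vector
    h = np.tanh(W1 @ x + b1)                  # hidden layer (tanh in place of the cube activation)
    return dict(zip(TRANSITIONS, W2 @ h))

# Example features: top of stack, second on stack, first buffer word.
print(score_transitions(["reads", "she", "books"]))
```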

Bio:

Danqi Chen is a PhD student in Computer Science at Stanford University, working with Christopher Manning on deep learning approaches to Natural Language Processing. Her research centers on how computers can achieve a deep understanding of human language and the information it contains. Danqi received Outstanding Paper Awards at ACL 2016 and EMNLP 2017, a Facebook Fellowship, a Microsoft Research Women’s Fellowship and an Outstanding Course Assistant Award from Stanford. She holds a B.E. with honors from Tsinghua University.

Secondary_appt Department-cs CS Colloquium/Kevin Fu, Univ. of Michigan/Feb. 27/4:00 p.m., AKW 200

CS Colloquium

Tuesday, February 27, 2018

4:00 p.m., AKW 200 (coffee & cookies at 3:45)

Speaker: Kevin Fu, University of Michigan

Title: Analog Cybersecurity and Transduction Attacks

Host: Zhong Shao

Abstract:

Medical devices, autonomous vehicles, and the Internet of Things depend on the integrity and availability of trustworthy data from sensors to make safety-critical, automated decisions. How can such cyberphysical systems remain secure against an adversary using intentional interference to fool sensors? Building upon classic research in cryptographic fault injection and side channels, research in analog cybersecurity explores how to protect digital computer systems from physics-based attacks. Analog cybersecurity risks can bubble up into operating systems as bizarre, undefined behavior. For instance, transduction attacks exploit vulnerabilities in the physics of a sensor to manipulate its output. Transduction attacks using audible acoustic, ultrasonic, or radio interference can inject chosen signals into sensors found in devices ranging from fitbits to implantable medical devices to drones and smartphones.

Why do microprocessors blindly trust input from sensors, and what can be done to establish trust in unusual input channels in cyberphysical systems? Why are students taught to hold the digital abstraction as sacrosanct and unquestionable? Come to this talk to learn about undefined behavior in basic building blocks of computing. I will also suggest educational opportunities for embedded security and discuss how to design out analog cybersecurity risks by rethinking the computing stack from electrons to bits. This work brings some closure to my curiosity on why my cordless phone would ring whenever I executed certain memory operations on the video graphics chip of an Apple IIGS.
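
As a rough numerical illustration of the physics behind such transduction attacks (not material from the talk; the sampling rate and tone frequency below are made up): a tone far above a sensor's sampling rate aliases into the sensor's passband when the ADC lacks adequate anti-alias filtering, so the digital side "sees" a low-frequency signal that was never physically present at that frequency.

```python
# Sketch: an out-of-band tone aliasing into a low-rate sensor ADC.
import numpy as np

fs_adc = 1_000          # sensor sampling rate, Hz (hypothetical)
f_attack = 20_200       # injected high-frequency tone, Hz (hypothetical)

# Predicted alias frequency after sampling: fold f_attack into [0, fs/2].
k = round(f_attack / fs_adc)
f_alias = abs(f_attack - k * fs_adc)
print(f"predicted alias: {f_alias} Hz")       # 200 Hz

# Verify by "sampling" the tone and locating the dominant FFT bin.
n = fs_adc                                    # one second of samples
t = np.arange(n) / fs_adc
samples = np.sin(2 * np.pi * f_attack * t)    # ideal ADC with no anti-alias filter
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(n, d=1 / fs_adc)
print(f"dominant sampled frequency: {freqs[spectrum.argmax()]:.0f} Hz")
```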

Biography:

Kevin Fu is Associate Professor of EECS at the University of Michigan, where he directs the Security and Privacy Research Group (SPQR.eecs.umich.edu) and the Archimedes Center for Medical Device Security (secure-medicine.org). His research focuses on analog cybersecurity: how to model and defend against threats to the physics of computation and sensing. His embedded security research interests span from the physics of cybersecurity through the operating system to human factors. Past research projects include MEMS sensor security, pacemaker/defibrillator security, cryptographic file systems, web authentication, RFID security and privacy, wirelessly powered sensors, medical device safety, and public policy for information security & privacy.

Kevin was recognized as an IEEE Fellow, Sloan Research Fellow, MIT Technology Review TR35 Innovator of the Year, and recipient of a Fed100 Award and NSF CAREER Award. He received best paper awards from USENIX Security, IEEE S&P, and ACM SIGCOMM. He co-founded the healthcare cybersecurity startup Virta Labs. Kevin has testified in the House and Senate on matters of information security and has written commissioned work on trustworthy medical device software for the National Academy of Medicine. He is a member of the Computing Community Consortium Council, the ACM Committee on Computers and Public Policy, and the USENIX Security Steering Committee. He advises the American Hospital Association and Heart Rhythm Society on matters of healthcare cybersecurity. Kevin previously served as program chair of USENIX Security, a member of the NIST Information Security and Privacy Advisory Board, a visiting scientist at the Food & Drug Administration, and an advisor for Samsung’s Strategy and Innovation Center. Kevin received his B.S., M.Eng., and Ph.D. from MIT. He earned a certificate of artisanal bread making from the French Culinary Institute.

Mbbfacultyall MB&B Dissertation Seminar – Michael Lacy (Julien Berro, Advisor) – Friday, March 2, 305 Bass, 2:00 pm

MB&B Dissertation Seminar (Flyer attached)

Speaker: Michael Lacy (Julien Berro, Advisor)
Title: “Single-molecule dynamics in clathrin-mediated endocytosis and membrane remodeling”
Date: Friday, March 2, 2018
Time: 2:00 pm
Place: 305 Bass

Tea at 1:45 pm
Lacy dissertation flyer.pdf

Statseminars Stat & Data Science Seminar, Speaker: Aaditya Ramdas, Monday, 2/26 @ 4:15pm

DEPARTMENT OF STATISTICS AND DATA SCIENCE SEMINAR

Date: Monday, February 26, 2018

Time: 4:15pm – 5:15pm

Place: 24 Hillhouse Avenue, Rm. 107

Seminar Speaker: Aaditya Ramdas

University of California, Berkeley, http://people.eecs.berkeley.edu/~aramdas/

Title: Interactive algorithms for multiple hypothesis testing

Abstract: Data science is at a crossroads. Each year, thousands of new data scientists are entering science and technology, after a broad training in a variety of fields. Modern data science is often exploratory in nature, with datasets being collected and dissected in an interactive manner. Classical guarantees that accompany many statistical methods are often invalidated by their non-standard interactive use, resulting in an underestimated risk of falsely discovering correlations or patterns. It is a pressing challenge to upgrade existing tools, or create new ones, that are robust to involving a human-in-the-loop. In this talk, I will describe two new advances that enable some amount of interactivity while testing multiple hypotheses, and control the resulting selection bias. I will first introduce a new framework, STAR, that uses partial masking to divide the available information into two parts, one for selecting a set of potential discoveries, and the other for inference on the selected set. I will then show that it is possible to flip the traditional roles of the algorithm and the scientist, allowing the scientist to make post-hoc decisions after seeing the realization of an algorithm on the data. The theoretical basis for both advances is founded in the theory of martingales: in the first, the user defines the martingale and associated filtration interactively, and in the second, we move from optional stopping to optional spotting by proving uniform concentration bounds on relevant martingales.
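
A toy numerical sketch of the partial-masking idea described above, in the spirit of STAR/AdaPT-style procedures but not the STAR algorithm itself: each p-value is split into a visible magnitude min(p, 1-p) and a hidden bit 1{p ≤ 0.5}, and the hidden bits inside the current candidate region are used only through a mirror-based estimate of the false discovery proportion. All numbers are simulated.

```python
# Toy illustration of p-value partial masking with a mirror-based FDP estimate.
import numpy as np

rng = np.random.default_rng(1)
m, alpha = 1_000, 0.1

# Simulated p-values: 80% true nulls (uniform), 20% non-nulls (small p-values).
null = rng.random(m) < 0.8
p = np.where(null, rng.random(m), rng.beta(0.5, 10, m))

masked = np.minimum(p, 1 - p)     # visible to the analyst
small = p <= 0.5                  # hidden bit, used only inside the FDP estimate

# Shrink a symmetric region {masked <= s} until the estimated FDP,
# (1 + #{mirror side}) / #{small side}, drops below alpha.
for s in np.sort(masked)[::-1]:
    region = masked <= s
    fdp_hat = (1 + np.sum(region & ~small)) / max(1, np.sum(region & small))
    if fdp_hat <= alpha:
        rejected = region & small
        break
else:
    rejected = np.zeros(m, dtype=bool)

print(f"rejections: {rejected.sum()}, realised FDP: "
      f"{(rejected & null).sum() / max(1, rejected.sum()):.3f}")
```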

This talk will feature joint work with (alphabetically) Rina Barber, Jianbo Chen, Will Fithian, Kevin Jamieson, Michael Jordan, Eugene Katsevich, Lihua Lei, Max Rabinovich, Martin Wainwright, Fanny Yang and Tijana Zrnic.

Bio: Aaditya Ramdas is a postdoctoral researcher in Statistics and EECS at UC Berkeley, advised by Michael Jordan and Martin Wainwright. He finished his PhD in Statistics and Machine Learning at CMU, advised by Larry Wasserman and Aarti Singh, winning the Best Thesis Award in Statistics. A lot of his research focuses on modern aspects of reproducibility in science and technology, involving statistical testing and false discovery rate control in static and dynamic settings.

4:00 p.m. Refreshments in Common Room, 24 Hillhouse Avenue

4:15p.m. – 5:15p.m. Seminar, Room 107, 24 Hillhouse Avenue

For more details and upcoming events, visit our website at http://statistics.yale.edu/.

farnam disk usage

total 4.93542E+11 of 600 TB (a snapshot-comparison sketch follows the listing below)
gg487 80346153088
sl857 42453918080
fn64 37230865408
jx98 34102180352
mg888 31884811008
jz435 27793173504
tg397 26743508608
sk972 21909421952
pse5 20333742208
sl2373 15417125248
dl598 15304594048
cs784 13923634560
mr724 11768326784
ll426 8905029760
sl847 8821790592
wum2 8420884352
pmm49 8177639424
jad248 7989755008
yy222 6347266176
rrk24 6182451584
yf9 5816445952
hm444 5719293568
mihali 5459016704
lc848 4090249984
meg98 3984611584
ah633 3367398912
bp272 2906803456
xk4 2393468032
jjl86 1928689024
rdb9 1763952640
msp48 1748680320
as2665 1596345472
ky26 1583088768
ml724 1557992448
jl56 1480538368
ha275 1467031936
jw2394 1423484800
sb238 1275168128
gf3 1189340928
jrb97 1012897664
cy288 876665856
slw67 788305152
pdm32 752088448
lh372 671649152
jsr59 592016256
as898 506352512
dc547 424654976
mpw6 385383040
hz244 374372096
km735 337744640
nb23 324053504
ls926 314810880
keckadmins 265108480
aa544 249558400
xl348 237337088
simen 163574272
xz374 162198144
lr579 159751424
yf95 150772480
nmb38 115795456
jjl83 109213440
mas343 96425216
yk336 95688832
williams 95688832
xc279 85381888
zl222 68034176
wb244 63682432
rka24 59127808
yy448 46536704
aa65 44632832
zc264 43432192
gene760 33406080
zhao 25241600
amg89 21919360
co254 21889920
an377 19965312
xm24 19335680
jc2296 17970560
jw72 17455616
njc2 16694016
mx55 11160960
root 9156608
jk935 6167936
cc59 4636672
law72 3522560
shuch 3039616
yz464 1122176
gene760_2016 475520
bab99 387584
tl444 326144
dr395 185472
jhq4 115584
mj332 60160
rm658 4096
jjp76 3968
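
Since this is a later snapshot of the same listing that appears earlier on this page, a quick sketch for comparing two snapshots per user; only a few sample lines from each snapshot are inlined, copied from the two listings, and the units are left exactly as they appear there.

```python
# Sketch: compare two "user  size" snapshots and print the largest changes.
earlier = """\
gg487 78814601600
tg397 36298128896
xc279 306357504
"""
later = """\
gg487 80346153088
tg397 26743508608
xc279 85381888
"""

def parse(text):
    """Return {user: size} from a whitespace-separated listing."""
    return {u: int(v) for u, v in (line.split() for line in text.splitlines())}

old, new = parse(earlier), parse(later)
deltas = {u: new.get(u, 0) - old.get(u, 0) for u in old.keys() | new.keys()}
for user, delta in sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{user:8s} {delta:+d}")
```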