Juliana D. Adema


I'm a Ph.D. student at the University of Toronto, studying under the supervision of Dr. Michael Mack. My research focuses on attentional guidance and visual search.


NEWS

Poster and Proceedings at Virtual CogSci 2021

Look out for developments in my Python implementation of the Exemplar-Based Random Walk (EBRW) model at this year’s meeting of the Cognitive Science Society. We will be publishing a paper in the conference proceedings, as well as presenting a poster summarising our approach (date and time TBA).
I’d like to thank my research assistants (and co-authors!), Shuran (Rayna) Tang and Nahal Alizadeh-Saghati, for all of their help getting this together.
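For the curious: EBRW (Nosofsky & Palmeri, 1997) models categorization as a random walk driven by similarity-based exemplar retrieval. Below is a minimal, illustrative two-category sketch of that idea, not our actual implementation; all names, parameter values, and the toy exemplars are placeholders.

```python
import math
import random

def ebrw_trial(probe, exemplars, c=2.0, threshold=3, rng=random):
    """One trial of a toy two-category Exemplar-Based Random Walk.

    exemplars: list of (feature_vector, category), category in {"A", "B"}.
    Returns (choice, n_steps), where n_steps stands in for response time.
    """
    # Summed similarity of the probe to each category's exemplars,
    # using an exponential generalization gradient over Euclidean distance.
    sims = {"A": 0.0, "B": 0.0}
    for features, category in exemplars:
        sims[category] += math.exp(-c * math.dist(probe, features))
    p_a = sims["A"] / (sims["A"] + sims["B"])

    # Retrieval race as a random walk: each retrieved exemplar nudges the
    # counter toward its category's boundary until a threshold is crossed.
    counter, steps = 0, 0
    while abs(counter) < threshold:
        counter += 1 if rng.random() < p_a else -1
        steps += 1
    return ("A" if counter > 0 else "B"), steps

# Toy usage: two exemplars per category, probe close to category A.
random.seed(0)
exemplars = [((0.0, 0.0), "A"), ((1.0, 0.0), "A"),
             ((5.0, 5.0), "B"), ((4.0, 5.0), "B")]
choice, steps = ebrw_trial((0.5, 0.5), exemplars)
```

Here the probe's similarity is concentrated on category A, so the walk tends to hit the A boundary quickly; harder, more ambiguous probes take more steps, which is how EBRW links accuracy and response time.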

Presenting at V-VSS 2021

I’ll be presenting more work on gist-guided attention at this year’s virtual VSS conference. Drop by during the “Visual search” poster session on Wednesday, May 26, 2021, from 8:00 to 10:00 am EDT.


OPAM poster can be viewed online!

Check out my poster at this year’s virtual OPAM conference! If you want to talk to me about this research, I’m hosting a Zoom call on November 19th from 12:30 pm to 1:15 pm EST. DM me (@j_adema_) or email me for details!
The poster can be viewed here


Mack Lab at Virtual Psychonomics 2020

I am pleased to announce that we will be presenting further work on computational model-based fMRI at this year’s annual meeting of the Psychonomic Society!


Master’s work to be presented at OPAM 2020

The poster PDF and a video walkthrough will be uploaded!


Master’s thesis successfully defended!

Stay tuned for my thesis write-up, to be uploaded via ProQuest.

Abstract

In addition to saliency and goal-based factors, a scene’s semantic content has been shown to guide attention in visual search tasks. Here, we ask if this rapidly available guidance signal can be leveraged to learn new attentional strategies. In two variants of the scene preview paradigm (Castelhano & Heaven, 2010), participants searched for targets embedded in real-world scenes with target locations linked to scene gist. In one experiment, we found that activating gist with scene previews significantly increased search efficiency in a manner consistent with formal theories of skill acquisition. In the second experiment, search efficiency significantly increased across the experiment; however, this learning effect did not differ between preview conditions. Results from a computational model suggest that, when preview information is useful, stimulus features may amplify the similarities and differences between exemplars.

Mack Lab at CogSci 2020

I presented with my colleagues Emily M. Heffernan and Dr. Michael L. Mack at the virtual 42nd Annual Conference of the Cognitive Science Society.
Link to preprint here
Link to proceedings paper here