CORNIA, MARCELLA
Geographic distribution
Continent #
NA - North America 9.439
AS - Asia 9.284
EU - Europe 8.035
SA - South America 886
AF - Africa 179
OC - Oceania 51
Unknown continent - Continent information not available 16
Total 27.890
Country #
US - United States of America 9.148
IT - Italy 3.854
SG - Singapore 2.626
CN - China 2.286
HK - Hong Kong 1.052
GB - United Kingdom 974
TR - Turkey 802
VN - Vietnam 792
DE - Germany 685
BR - Brazil 656
FR - France 456
SE - Sweden 431
KR - Korea 415
JP - Japan 282
FI - Finland 265
RU - Russian Federation 252
NL - Netherlands 231
IN - India 192
CA - Canada 163
ID - Indonesia 162
IE - Ireland 135
ES - Spain 123
TW - Taiwan 119
BD - Bangladesh 113
UA - Ukraine 99
MX - Mexico 94
AR - Argentina 82
AT - Austria 82
IQ - Iraq 61
BE - Belgium 59
CH - Switzerland 58
PL - Poland 56
BG - Bulgaria 50
ZA - South Africa 50
AU - Australia 43
MY - Malaysia 43
PK - Pakistan 38
AE - United Arab Emirates 37
LT - Lithuania 37
SA - Saudi Arabia 37
RO - Romania 34
PT - Portugal 33
EC - Ecuador 31
DK - Denmark 30
IL - Israel 29
PE - Peru 25
EG - Egypt 24
KE - Kenya 24
CL - Chile 23
VE - Venezuela 23
CO - Colombia 22
MA - Morocco 20
JO - Jordan 19
UZ - Uzbekistan 19
PH - Philippines 18
GR - Greece 17
DZ - Algeria 16
TH - Thailand 16
EU - Europe 15
NP - Nepal 15
TN - Tunisia 15
KZ - Kazakhstan 14
CZ - Czech Republic 13
PY - Paraguay 12
IR - Iran 11
OM - Oman 11
AZ - Azerbaijan 10
LU - Luxembourg 10
BZ - Belize 8
ET - Ethiopia 8
RS - Serbia 8
AM - Armenia 7
KH - Cambodia 7
NZ - New Zealand 7
SC - Seychelles 7
UY - Uruguay 7
MO - Macao, Special Administrative Region of China 6
SK - Slovakia (Slovak Republic) 6
AL - Albania 5
BA - Bosnia and Herzegovina 5
DO - Dominican Republic 5
HR - Croatia 5
LB - Lebanon 5
SY - Syrian Arab Republic 5
BH - Bahrain 4
BO - Bolivia 4
GE - Georgia 4
GT - Guatemala 4
KG - Kyrgyzstan 4
KW - Kuwait 4
LK - Sri Lanka 4
LV - Latvia 4
MD - Moldova 4
MT - Malta 4
PS - Palestinian Territory 4
BB - Barbados 3
BY - Belarus 3
CR - Costa Rica 3
HU - Hungary 3
JM - Jamaica 3
Total 27.849
City #
Singapore 1.642
Santa Clara 989
Ashburn 927
Hong Kong 857
Elâzığ 718
Hefei 715
Fairfield 661
Modena 602
Chandler 505
San Jose 480
Southend 436
Beijing 346
Seattle 315
Houston 295
Seoul 280
Woodbridge 280
Milan 264
Bologna 261
Ho Chi Minh City 253
London 245
Los Angeles 245
Cambridge 237
Wilmington 233
Nyköping 227
Ann Arbor 214
Helsinki 174
Hanoi 171
Buffalo 158
Rome 140
Chicago 132
Tokyo 129
New York 122
Jakarta 121
Dearborn 116
Boardman 115
The Dalles 114
Reggio Emilia 113
Dublin 107
Jacksonville 105
Lauterbourg 96
Parma 96
Council Bluffs 90
San Diego 88
Shanghai 84
Munich 82
São Paulo 76
Florence 74
Amsterdam 69
Nuremberg 69
Frankfurt am Main 67
Princeton 57
Taipei 57
Turin 57
Redwood City 54
Orem 53
Bomporto 52
Montreal 50
Sofia 48
Mexico City 47
Bremen 46
Moscow 46
Salt Lake City 46
Kent 45
Dallas 44
Paris 42
Pisa 42
Da Nang 41
Falkenstein 41
Phoenix 39
Brussels 38
Chennai 37
Warsaw 37
Naples 36
Vienna 36
Dong Ket 35
Palermo 35
Izmir 33
Toronto 33
Eugene 31
Guangzhou 30
Haiphong 29
Düsseldorf 28
Formigine 28
Johannesburg 28
Lappeenranta 28
Zurich 27
Dhaka 26
Falls Church 26
Manchester 26
Seo-gu 26
Copenhagen 25
Hangzhou 25
Ottawa 25
Bari 24
Biên Hòa 23
Central 23
Fremont 23
Nairobi 23
Baghdad 22
Berlin 22
Total 16.730
Title #
What was Monet seeing while painting? Translating artworks to photo-realistic images 642
Attentive Models in Vision: Computing Saliency Maps in the Deep Learning Era 563
MissRAG: Addressing the Missing Modality Challenge in Multimodal Large Language Models 555
Visual-Semantic Alignment Across Domains Using a Semi-Supervised Approach 545
Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models 504
Attentive Models in Vision: Computing Saliency Maps in the Deep Learning Era 496
Towards Cycle-Consistent Models for Text and Image Retrieval 489
Artpedia: A New Visual-Semantic Dataset with Visual and Contextual Sentences in the Artistic Domain 472
Modeling Multimodal Cues in a Deep Learning-based Framework for Emotion Recognition in the Wild 463
Automatic Image Cropping and Selection using Saliency: an Application to Historical Manuscripts 442
Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions 415
Imparare a descrivere gli oggetti salienti presenti nelle immagini tramite la visione e il linguaggio 414
Learning to Read L'Infinito: Handwritten Text Recognition with Synthetic Training Data 402
Aligning Text and Document Illustrations: towards Visually Explainable Digital Humanities 400
M-VAD Names: a Dataset for Video Captioning with Naming 398
Dress Code: High-Resolution Multi-Category Virtual Try-On 394
A Deep Multi-Level Network for Saliency Prediction 391
Explaining Digital Humanities by Aligning Images and Textual Descriptions 384
Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model 383
FashionSearch++: Improving Consumer-to-Shop Clothes Retrieval with Hard Negatives 380
Recognizing social relationships from an egocentric vision perspective 376
Image-to-Image Translation to Unfold the Reality of Artworks: an Empirical Analysis 356
Unveiling the Impact of Image Transformations on Deepfake Detection: An Experimental Analysis 355
Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation 355
Dual-Branch Collaborative Transformer for Virtual Try-On 349
Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation 348
Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs 339
SAM: Pushing the Limits of Saliency Prediction Models 339
The Revolution of Multimodal Large Language Models: A Survey 337
Benchmarking BERT-based Models for Latin: A Case Study on Biblical References in Ancient Christian Literature 332
Multi-Level Net: a Visual Saliency Prediction Model 331
Visual Saliency for Image Captioning in New Multimedia Services 330
Dress Code: High-Resolution Multi-Category Virtual Try-On 328
Transform, Warp, and Dress: A New Transformation-Guided Model for Virtual Try-On 324
From Show to Tell: A Survey on Deep Learning-based Image Captioning 314
Explore and Explain: Self-supervised Navigation and Recounting 313
SynthCap: Augmenting Transformers with Synthetic Data for Image Captioning 313
A Novel Attention-based Aggregation Function to Combine Vision and Language 312
Multimodal Attention Networks for Low-Level Vision-and-Language Navigation 302
Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention 299
Towards Video Captioning with Naming: a Novel Dataset and a Multi-Modal Approach 296
Meshed-Memory Transformer for Image Captioning 290
CaMEL: Mean Teacher Learning for Image Captioning 281
VITON-GT: An Image-based Virtual Try-On Model with Geometric Transformations 277
Embodied Agents for Efficient Exploration and Smart Scene Description 276
Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities 272
A Unified Cycle-Consistent Neural Model for Text and Image Retrieval 262
Retrieval-Augmented Transformer for Image Captioning 257
Investigating Bidimensional Downsampling in Vision Transformer Models 255
Boosting Modern and Historical Handwritten Text Recognition with Deformable Convolutions 254
Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval 249
Adapt to Scarcity: Few-Shot Deepfake Detection via Low-Rank Adaptation 245
Learning to Select: A Fully Attentive Approach for Novel Object Captioning 237
Focus on Impact: Indoor Exploration with Intrinsic Motivation 237
Semantically Conditioned Prompts for Visual Recognition under Missing Modality Scenarios 232
Fashion-RAG: Multimodal Fashion Image Editing via Retrieval-Augmented Generation 232
Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis 231
Embodied Navigation at the Art Gallery 231
The Unreasonable Effectiveness of CLIP features for Image Captioning: an Experimental Analysis 230
Modeling Human Gaze Behavior with Diffusion Models for Unified Scanpath Prediction 224
Are Learnable Prompts the Right Way of Prompting? Adapting Vision-and-Language Models with Memory Optimization 221
BRIDGE: Bridging Gaps in Image Captioning Evaluation with Stronger Visual Cues 220
OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data 218
Matching Faces and Attributes Between the Artistic and the Real Domain: the PersonArt Approach 217
ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval 216
The LAM Dataset: A Novel Benchmark for Line-Level Handwritten Text Recognition 210
With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning 208
Working Memory Connections for LSTM 207
Towards Explainable Navigation and Recounting 206
LaDI-VTON: Latent Diffusion Textual-Inversion Enhanced Virtual Try-On 203
Fashion-Oriented Image Captioning with External Knowledge Retrieval and Fully Attentive Gates 199
TPP-Gaze: Modelling Gaze Dynamics in Space and Time with Neural Temporal Point Processes 195
Personalizing Multimodal Large Language Models for Image Captioning: An Experimental Analysis 189
Spot the Difference: A Novel Task for Embodied Agents in Changing Environments 187
Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation 186
Unlearning Vision Transformers without Retaining Data via Low-Rank Decompositions 185
Verifier Matters: Enhancing Inference-Time Scaling for Video Diffusion Models 184
Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering 177
SMArT: Training Shallow Memory-aware Transformers for Robotic Explainability 176
Personalized Instance-based Navigation Toward User-Specific Objects in Realistic Environments 175
Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing 175
Video Surveillance and Privacy: A Solvable Paradox? 170
Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization 170
Fluent and Accurate Image Captioning with a Self-Trained Reward Model 165
Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images 163
Explaining Transformer-based Image Captioning Models: An Empirical Analysis 162
Out of the Box: Embodied Navigation in the Real World 161
Trends, Applications, and Challenges in Human Attention Modelling 158
Mitigating Hallucinations in Multimodal LLMs via Object-aware Preference Optimization 149
Computer Vision in Human Analysis: From Face and Body to Clothes 145
Towards Retrieval-Augmented Architectures for Image Captioning 145
Generating More Pertinent Captions by Leveraging Semantics and Style on Multi-Source Datasets 138
Unveiling the Truth: Exploring Human Gaze Patterns in Fake Images 137
Augmenting and Mixing Transformers with Synthetic Data for Image Captioning 130
Multi-Class Unlearning for Image Classification via Weight Filtering 127
Sketch2Stitch: GANs for Abstract Sketch-Based Dress Synthesis 122
Image Captioning Evaluation in the Age of Multimodal LLMs: Challenges and Future Perspectives 116
Learning to Mask and Permute Visual Tokens for Vision Transformer Pre-Training 113
What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models 110
Pixels of Faith: Exploiting Visual Saliency to Detect Religious Image Manipulation 108
Total 27.695
Category #
all - all 91.157
article - articles 0
book - books 0
conference - conference papers 0
curatela - edited works 0
other - other 0
patent - patents 0
selected - selected 0
volume - volumes 0
Total 91.157


Year Total Jul Aug Sep Oct Nov Dec Jan Feb Mar Apr May Jun
2020/2021 569 0 0 0 0 0 0 0 0 0 296 121 152
2021/2022 2.415 117 119 124 122 84 107 155 187 244 324 575 257
2022/2023 2.147 269 223 186 179 244 187 85 168 304 64 134 104
2023/2024 1.987 254 148 216 215 270 104 97 118 61 175 135 194
2024/2025 6.595 626 211 216 378 948 682 328 538 745 585 624 714
2025/2026 10.818 1.065 613 979 1.185 1.574 676 1.325 1.270 996 1.135 0 0
Total 28.378