Human-robot collaboration promises to free the human to multitask and engage in cognitive work while the robot assists with physical tasks, thereby increasing productivity. However, this collaborative paradigm requires continuous attention from human operators, which can strain their cognitive resources. Excessive attention demands can lead to safety hazards, increased errors, and reduced efficiency. Despite its critical importance, there is limited empirical research on attentional factors in industrial human-robot collaboration. In this study, we explore attentional multitasking in collaborative human-robot assembly settings. Our experimental setup involves participants performing a wire-harnessing task with a collaborative robot while simultaneously completing a Go/No-Go test as a secondary task. To observe the effect of multitasking, we varied the difficulty of the secondary task across two levels and analysed its impact on work performance and workload. Our results support threaded cognition theory, suggesting that human-robot collaboration can deplete attentional resources and reduce available cognitive capacity, leading to more errors and longer cycle times during multitasking. This underscores the importance of a detailed understanding of attentional factors in human-robot collaboration. We discuss our findings and their implications, and provide insights into the design and adjustment of human-robot collaboration tasks in industry.
What the experiment is
A within-subjects user study (N=16) on attentional multitasking during collaborative assembly. Each participant performed a wire-harnessing task together with a UR5e cobot (primary task) while responding to a visual Go/No-Go test on a screen (secondary task). The secondary task had two difficulty levels:
(1) Black–white: press a foot pedal when the screen turns black (rare, unpredictable).
(2) Multi-colour: distinguish Go (black) from No-Go colours (grey/brown/white), imposing higher attentional demand.
Session order was counterbalanced (eight started with black–white; eight with multi-colour). Outcomes included task completion time, assembly errors, secondary-task hits/misses/false alarms, and NASA RTLX workload after each session; a subset of participants gave qualitative feedback in post-interviews.
File descriptions
1st block.xlsx and 2nd block.xlsx — Secondary-task performance logs
Structure: one sheet per participant; 1st block covers Participants 1–8, 2nd block covers 9–16.
Rows: two rows per participant (one per session/round).
Columns (identical in both files):
participant – participant ID
Round – session number (1 or 2; order is counterbalanced)
Number of Yes – correct Go responses (hits)
Number of No – false alarms (responses to No-Go)
Number of Misses – missed Go stimuli
Duration (min) – session duration in minutes
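For analysis, the raw counts above are typically converted into rates. A minimal sketch in plain Python (column meanings follow the list above; actually reading the sheets, e.g. with pandas' `read_excel` and `sheet_name=None` for all participant sheets, is left out so the snippet stays self-contained):

```python
def session_rates(n_yes, n_no, n_misses, duration_min):
    """Derive secondary-task rates from one session row.

    n_yes        - correct Go responses (hits)
    n_no         - false alarms (responses to No-Go stimuli)
    n_misses     - missed Go stimuli
    duration_min - session duration in minutes
    """
    go_trials = n_yes + n_misses  # total Go stimuli presented
    hit_rate = n_yes / go_trials if go_trials else float("nan")
    false_alarms_per_min = n_no / duration_min
    return hit_rate, false_alarms_per_min

# Hypothetical example: 45 hits, 3 false alarms, 5 misses over a 10-minute session
hit_rate, fa_rate = session_rates(45, 3, 5, 10.0)
# hit_rate -> 0.9, fa_rate -> 0.3
```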
NasaTLX-1st block.xlsx and NasaTLX-2nd block.xlsx — Per-participant NASA RTLX
Structure: one sheet per participant (participants 1–8 in “1st block”, 9–16 in “2nd block”).
What you’ll see in each sheet: a standard NASA RTLX worksheet.
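The RTLX ("Raw TLX") composite is the unweighted mean of the six NASA-TLX subscale ratings, with the pairwise-weighting step of the full TLX dropped. A minimal sketch of that computation (the subscale names are the standard NASA-TLX ones, not column headers taken from these worksheets):

```python
def rtlx_composite(mental, physical, temporal, performance, effort, frustration):
    """Raw TLX: unweighted mean of the six subscale ratings (each 0-100)."""
    return (mental + physical + temporal + performance + effort + frustration) / 6

# Hypothetical ratings:
# rtlx_composite(60, 40, 50, 30, 70, 50) -> 50.0
```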
Final results.xlsx — Aggregated table across sessions
Sheet: Full_Data (two rows per participant; 32 session rows in total for 16 participants).
Columns:
Experiment – session-order code (e.g., 1-2 or 2-1, indicating which difficulty came first; 1 = Black–white, 2 = Multi-colour)
Participant, Gender, Age, Round (1 or 2)
Secondary-task counts: Number of Yes, Number of No, Number of Misses
Duration (min)
Primary-task (assembly) error categories: loosely connected, mismatch, missed socket, not connected (counts)
workload (100) – NASA RTLX composite workload score (0–100 scale)
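When analysing Full_Data, each session row is usually mapped back to its difficulty level (via the Experiment code and Round) and the four assembly-error categories summed into a single error count. A minimal sketch under the column conventions listed above (rows represented as plain dicts for self-containment):

```python
def session_difficulty(experiment_code, round_no):
    """Decode which difficulty ran in a given round from the session-order code.

    experiment_code - "1-2" or "2-1" (1 = Black-white, 2 = Multi-colour)
    round_no        - 1 or 2
    Returns 1 or 2: the difficulty level of that round.
    """
    return int(experiment_code.split("-")[round_no - 1])

# The four primary-task error columns, named exactly as in Full_Data.
ERROR_COLUMNS = ("loosely connected", "mismatch", "missed socket", "not connected")

def total_errors(row):
    """Sum the four primary-task error categories of one session row (a dict)."""
    return sum(row[col] for col in ERROR_COLUMNS)

# Hypothetical session row:
row = {"Experiment": "2-1", "Round": 1,
       "loosely connected": 1, "mismatch": 0, "missed socket": 2, "not connected": 0}
# session_difficulty(row["Experiment"], row["Round"]) -> 2 (Multi-colour ran first)
# total_errors(row) -> 3
```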
Qualitative_Participants_feedback.docx — Post-interview notes (subset)
Content: brief per-participant tables (hits/false alarms/misses/time/primary-errors/NASA load) plus narrative comments on strategy adaptation (e.g., prioritizing one task), learning effects, and fatigue.