<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0" xml:lang="ja">
	<channel>
		<title>HASCA2015</title>
		<link>http://hasca2015.hasc.jp/</link>
		<atom:link href="http://hasca2015.hasc.jp/rss2.xml" rel="self" type="application/rss+xml" />
		<description>3rd International Workshop on Human Activity Sensing Corpus and its Application</description>
		<language>ja</language>
		<copyright>Copyright (C) 2026 HASCA2015 All rights reserved.</copyright>
		<lastBuildDate>Thu, 09 Oct 2025 18:49:39 +0900</lastBuildDate>
		<generator>a-blog cms</generator>
		<docs>http://blogs.law.harvard.edu/tech/rss</docs>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Organizers &amp; Committee</title>
			<link>http://hasca2025.hasc.jp/pc/index.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<h2 id="h694">ORGANIZERS</h2>
				

				
			
				
				
				<ul >
<li>Kazuya MURAO (Ritsumeikan University, Japan)</li>
<li>Yu ENOKIBORI (Nagoya University, Japan)</li>
<li>Hristijan GJORESKI (Ss. Cyril and Methodius University, N. Macedonia)</li>
<li>Paula LAGO (Concordia University, Canada)</li>
<li>Tsuyoshi OKITA (Kyushu Institute of Technology, Japan)</li>
<li>Pekka SIIRTOLA (University of Oulu, Finland)</li>
<li>Kei HIROI (Kyoto University, Japan)</li>
<li>Philipp M. SCHOLL (University of Freiburg, Germany)</li>
<li>Mathias CILIBERTO (University of Sussex, UK)</li>
<li>Kenta URANO (Nagoya University, Japan)</li>
<li>Marius Bock (University of Siegen, Germany)</li>
</ul>
				

				
			
				
				
				<h2 id="h696">ADVISORY BOARDS</h2>
				

				
			
				
				
				<ul >
<li>Nobuo Kawaguchi (Nagoya University, Japan)</li>
<li>Nobuhiko Nishio (Ritsumeikan University, Japan)</li>
<li>Daniel Roggen (University of Sussex, UK)</li>
<li>Sozo Inoue (Kyushu Institute of Technology, Japan)</li>
<li>Susanna Pirttikangas (University of Oulu, Finland)</li>
<li>Kristof van Laerhoven (University of Freiburg, Germany)</li>
</ul>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>pc</category>
			<guid isPermaLink="true">http://hasca2025.hasc.jp/pc/index.html</guid>
			<pubDate>Thu, 01 May 2025 16:34:22 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Call for Contributions</title>
			<link>http://hasca2025.hasc.jp/cfp/entry-64.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<p >We are pleased to announce that the HASCA (Human Activity Sensing Corpus and Applications) Workshop will take place as part of <a href="https://www.ubicomp.org/ubicomp-iswc-2025/" target="_blank">Ubicomp2025</a>.<br />
HASCA is one of the largest workshops at Ubicomp and has been held for over 13 years.</p>
				

				
			
				
				
				<h2 id="h705">Dates</h2>
				

				
			
				
				
				<p >Submission Deadline: June <s>15 22 (ext.)</s><b>29 (ext.)</b><br />
* For submissions after the 23rd, please follow the announcement on the submission site.<br />
<font color="red">* (update at June 8th) Submissions are now open for papers rejected at ISWC. Please submit your paper, the ISWC review results, and a letter explaining your revision policy. Submission Deadline: July 13 (AoE)</font><br />
Acceptance Notification: July 16<br />
Camera-ready: July 31 <strong>HARD</strong><br />
Workshop: Oct. 12 (Room U3)<br />
<br />
For SHL Challenge and WEAR challenge, please check each challenge's conditions as dates may differ.</p>
				

				
			
				
				
				<h2 id="h707">SUMMARY</h2>
				

				
			
				
				
				<p >The objective of this workshop is to share the experiences among<br />
researchers about current challenges of real-world activity<br />
recognition with newly developed datasets and tools, breaking through<br />
towards open-ended contextual intelligence.<br />
<br />
This workshop discusses the challenges of designing reproducible<br />
experimental setups, the large-scale dataset collection campaigns, the<br />
activity and context recognition methods that are robust and adaptive,<br />
and evaluation systems in the real world.<br />
<br />
As a special topic of this year we will reflect on the challenges to<br />
recognize situations, events and/or activities among the statically<br />
predefined pools and beyond - which is the current state of the art -<br />
and instead we will adopt an "open-ended view" on activity and context<br />
awareness. This may result in combinations of the automatic discovery<br />
of relevant patterns in sensor data, the experience sampling and<br />
wearable technologies to unobtrusively discover the semantic meaning<br />
of such patterns, the crowd-sourcing of dataset acquisition and<br />
annotation, and new "open-ended" human activity modeling techniques.</p>
				

				
			
				
				
				<h2 id="h709">CALL FOR CONTRIBUTIONS</h2>
				

				
			
				
				
				<p ><strong>- *Data collection*, *Corpus construction*.</strong><br />
Experiences or reports from data collection and/or corpus construction<br />
projects, including papers which describe the formats, styles and/or<br />
methodologies for data collection. Crowd-sourcing data collection and<br />
participatory sensing could also be included in this topic.<br />
<br />
<strong>- *Effectiveness of Data*, *Data Centric Research*.</strong><br />
There is a field of research based on the collected corpora, which is<br />
so-called "data-centric research". We also call for experiences of<br />
using large-scale human activity sensing corpora. By analyzing<br />
large-scale corpora with machine learning, there is large room for<br />
improving the performance of recognition results.<br />
<br />
<strong>- *Tools and Algorithms for Activity Recognition*.</strong><br />
If we had appropriate tools for the management of sensor data,<br />
activity recognition researchers could focus more on their actual<br />
research themes. However, developed tools and algorithms are often<br />
not shared among the research community. In this workshop, we solicit<br />
reports on developed tools and algorithms to be shared with the<br />
community.<br />
<br />
<strong>- *Real World Application and Experiences*.</strong><br />
Activity recognition "in the lab" usually works well. However, it does<br />
not scale well to real-world data. In this workshop, we also solicit<br />
experiences from real-world applications. There is a huge gap<br />
between "lab" and "real world" environments. Large-scale human<br />
activity sensing corpora will help to overcome this gap.<br />
<br />
<strong>- *Sensing Devices and Systems*.</strong><br />
Data collection is not only performed with "off-the-shelf" sensors<br />
but also with newly developed sensors that supply information which<br />
has not yet been investigated. There is also a research area on the<br />
development of new platforms for data collection and evaluation<br />
tools for collected data.<br />
<br />
In light of this year's special emphasis on open-ended contextual<br />
awareness, we wish to cover these topics as well:<br />
<br />
<strong>- *Mobile Experience Sampling*, *Experience Sampling Strategies*.</strong><br />
Advances in experience sampling approaches, for instance intelligent<br />
user queries or those using novel devices (e.g. smartwatches), are<br />
likely to play an important role in providing user-contributed<br />
annotations of their own activities.<br />
<br />
<strong>- *Unsupervised Pattern Discovery*.</strong><br />
Discovering meaningful patterns in sensor data in an unsupervised<br />
manner may be needed to inform other elements of the system, for<br />
instance by querying the user or by triggering crowd-sourced<br />
annotation.<br />
<br />
<strong>- *Dataset Acquisition and Annotation*, *Crowd-Sourcing*, *Web-Mining*.</strong><br />
A wide abundance of sensor data is potentially within the reach of<br />
users instrumented with their mobile phones and other<br />
wearables. Capitalizing on crowd-sourcing to create larger datasets in<br />
a cost effective manner may be critical to open-ended activity<br />
recognition. Many online datasets are also available and could be used<br />
to bootstrap recognition models.<br />
<br />
<strong>- *Transfer Learning*, *Semi-Supervised Learning*, *Lifelong Learning*.</strong><br />
The ability to translate recognition models across modalities or to<br />
use minimal forms of supervision would allow datasets to be reused in<br />
a wider range of domains and reduce the costs of acquiring annotations.<br />
<br />
<strong>- *Deep Learning*.</strong><br />
Following the big success of deep learning in other AI domains, deep<br />
learning models are gradually playing an important role in activity<br />
recognition as well.</p>
				

				
			
				
				
				<h2 id="h711">AREAS OF INTEREST</h2>
				

				
			
				
				
				<ul >
<li>Human Activity Sensing Corpus</li>
<li>Large Scale Data Collection</li>
<li>Data Validation</li>
<li>Data Tagging / Labeling</li>
<li>Efficient Data Collection</li>
<li>Data Mining from Corpus</li>
<li>Automatic Segmentation</li>
<li>Performance Evaluation</li>
<li>Man-machine Interaction</li>
<li>Noise Robustness</li>
<li>Unsupervised Machine Learning</li>
<li>Sensor Data Fusion</li>
<li>Tools for Human Activity Corpus/Sensing</li>
<li>Participatory Sensing</li>
<li>Feature Extraction and Selection</li>
<li>Context Awareness</li>
<li>Pedestrian Navigation</li>
<li>Social Activities Analysis/Detection</li>
<li>Compressive Sensing</li>
<li>Sensing Devices</li>
<li>Lifelog Systems</li>
<li>Route Recognition/Detection</li>
<li>Wearable Application</li>
<li>Gait Analysis</li>
<li>Health-care Monitoring/Recommendation</li>
<li>Daily-life Worker Support</li>
<li>Deep Learning</li>
</ul>
				

				
			
				
				
				<h2 id="h713">FORMAT & TEMPLATE</h2>
				

				
			
				
				
				<p ><b>The paper must be at most 6 pages <s>including references</s> in the 2-column format. References do not count toward the page limit, but all text and figures/tables must be within the first 6 pages.</b> Due to capacity reasons, some papers may be accepted as poster presentations during the workshop (not UbiComp/ISWC poster sessions) instead of oral presentations. We also plan to open submissions for papers rejected from ISWC Notes/Briefs.<br />
(Update at Jun. 4: page limitation has been changed to fit with ISWC notes/briefs)<br />
<br />
ACM requires UbiComp/ISWC 2025 workshop submissions to use the double-column template. Please note that the template for submission is double-column format and the template for publication (camera-ready) is in single-column.<br />
Please carefully read <a href="https://www.ubicomp.org/ubicomp-iswc-2025/authors/formatting/" target="_blank">Ubicomp website about the template</a>.<br />
<br />
<b>Submissions do not need to be anonymous</b>.<br />
All publications will be peer reviewed together with their contribution to the topic of the workshop.<br />
The accepted papers will be published in the UbiComp/ISWC 2025 adjunct proceedings, which will be included in the ACM Digital Library.<br />
</p>
				

				
			
				
				
				<h2 id="h715">SUBMISSION</h2>
				

				
			
				
				
				<p ><s>As of May 13, the submission site is not open. Details below will be updated after the submission site is ready.</s><br />
As of June 4, the submission site is open!<br />
<br />
Please submit your papers from <a href="https://new.precisionconference.com/submissions" target="_blank" rel="noopener noreferrer">https://new.precisionconference.com/submissions</a><br />
Make a new submission as follows:</p>
				

				
			
				
				
				<ol >
<li>Society: SIGCHI</li>
<li>Conference/Journal: UbiComp/ISWC 2025</li>
<li>Track: UbiComp/ISWC 2025 13th Workshop on HASCA</li>
<li>"Go" button</li>
</ol>
				

				
			
				
				
				<h2 id="h718">IMPORTANT DATES </h2>
				

				
			
				
				
				<p >HASCA session papers:<br />
* For SHL Challenge and WEAR challenge, please check each challenge's conditions as dates may differ from HASCA papers.<br />
* For submissions after the 23rd, please follow the announcement on the submission site.<br />
<font color="red">* (update at June 8th) Submissions are now open for papers rejected at ISWC. Please submit your paper, the ISWC review results, and a letter explaining your revision policy. Submission Deadline: July 13 (AoE)</font></p>
				

				
			
				
				
				<ul >
<li>Submission Deadline: June <s>15 22 (ext.)</s><b>29 (ext.)</b><br></li>
<li>Acceptance Notification: July 16<br></li>
<li>Camera-ready: July 31 <strong>HARD</strong><br></li>
<li>Workshop: Oct. 12 or 13<br></li>
</ul>
				

				
			
				
				
				<h2 id="h721">SPECIAL SESSION</h2>
				

				
			
				
				
				<p >This year, the following challenges are held with HASCA.<br />
<br />
Sussex-Huawei Locomotion (SHL) Challenge<br />
<a href="http://www.shl-dataset.org/challenges/" target="_blank">http://www.shl-dataset.org/challenges/</a><br />
<br />
WEAR Dataset Challenge<br />
<a href="https://mariusbock.github.io/wear/challenge.html" target="_blank">https://mariusbock.github.io/wear/challenge.html</a></p>
				

				
			
				
				
				<h2 id="h723">CONTACT<br />
hasca-organizer[at]ml.hasc.jp</h2>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>cfp</category>
			<guid isPermaLink="true">http://hasca2025.hasc.jp/cfp/entry-64.html</guid>
			<pubDate>Thu, 01 May 2025 16:18:51 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Program</title>
			<link>http://hasca2025.hasc.jp/program/entry-62.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<p >The HASCA Workshop will take place on Sunday, 12th Oct. at Room U3.<br />
<br />
Presentation time:<br />
HASCA oral presentation - 15 min (12-min talk + 3-min Q&A)<br />
Other presentation - follow the timetable<br />
<br />
(note at Oct. 9: Timetable has been slightly modified to follow the official ubicomp timetable. According to that, FedFitTech... has been moved to 1st session from 4th.)</p>
				

				
			
				
				
				<table>
<tr>
	<td>08:00-09:00</td>
	<td>
		Registration<br>
	</td>
</tr>
<tr>
	<td>09:00-10:30</td>
	<td>
		Session 1: HASCA paper session 1 [90 min] (Chair: Kazuya Murao)<br>
		<ul>
			<li><em>Opening talk (10 min)</em></li>
			<li><em>Where Are the Best Positions of IMUs for HAR?- Investigation with four DNN models of different characteristics (15 min)</em><br>
			Yu Enokibori, Takahiro Sato, Kenji Mase (Nagoya University)</li>
			<li><em>Smartphone-Based Activity Recognition in a Logistics Warehouse Using Self-supervised Representation Learning (15 min)</em><br>
			Kisho Watanabe, Kazuma Kano, Tahera Hossain, Shin Katayama, Kenta Urano, Takuro Yonezawa, Nobuo Kawaguchi (Nagoya University)</li>
			<li><em>Identifying Routine from Sequences of Activities of Daily Living in Smart-homes (15 min)</em><br>
			Sayeda Shamma Alia, Paula Lago (Concordia University)</li>
			<li><em>One-Class Classifier-based Incremental Learning Method to Personalize Multi-Class Human Activity Recognition Models from Streaming Data (15 min)</em><br>
			Pekka Siirtola (University of Oulu)</li>
			<li><em>FedFitTech: A Baseline in Federated Learning for Fitness Tracking (15 min)</em><br>
			Zeyneddin Oz, Shreyas Korde, Marius Bock, Kristof Van Laerhoven (University of Siegen)</li>
		</ul>
	</td>
</tr>
<tr>
	<td>10:30-11:00</td>
	<td>
		Coffee Break
	</td>
</tr>
<tr>
	<td>11:00-12:30</td>
	<td>
		Session 2: WEAR challenge session [90 min] (Chair: Marius Bock)<br>
		<ul>
			<li><em>Opening Talk (10 min)</em></li>
			<li><em>Winning Solutions (15 min each)</em>
				<p>Note that the order does not reflect the final ranking; the results will be announced at the conference.</p>
				<ul>
					<li><em>FAME: Feature-Augmented Multi-View Ensemble Framework for Human Activity Recognition using Inertial Sensors</em><br>
					Francisco Calatrava (Örebro University), Lala Shakti Swarup Ray, Vitor Fortes Rey, Paul Lukowicz (DFKI), Oscar Mozos (Universidad Politécnica de Madrid)</li>
					<li><em>Challenging High-Performance Human Activity Recognition with a State-of-the-art Model and Simple Preprocessing</em><br>
					Atsuya Sumitou, Yu Enokibori (Nagoya University)</li>
					<li><em>Mitigating Null-Class Dominance in Multiclass Inertial-Based Activity Recognition</em><br>
					Ricarda Link, Heiner Stuckenschmidt (University of Mannheim)</li>
				</ul>
			</li>
			<li><em>Award Ceremony (10 min)</em></li>
			<li><em>Poster Sessions (25 min)</em></li>
		</ul>
	</td>
</tr>
<tr>
	<td>12:30-14:30</td>
	<td>
		Lunch Break
	</td>
</tr>
<tr>
	<td>14:30-16:00</td>
	<td>
		Session 3: SHL challenge session [90min] (Chair: Mathias Ciliberto)<br>
		<ul>
			<li><em>Opening Remarks (5 min)</em></li>
			<li>(Summary Task 1)<em>Summary of SHL Challenge 2025: Locomotion and Transportation Mode Recognition Using Foundation Models (12 min)</em><br>
			Lin Wang (Queen Mary University of London), Mathias Ciliberto (University of Cambridge), Hristijan Gjoreski (University in Skopje), Paula Lago (Concordia University), Kazuya Murao (Ritsumeikan University), Tsuyoshi Okita (Kyushu Institute of Technology), Daniel Roggen (University of Sussex)</li>
			<li>(Summary Task 2)<em>Foundation Models to Tackle Activity Recognition in Unknown Domain:  Sussex-Huawei Locomotion Challenge 2025 Task 2 (12 min)</em><br>
			Tsuyoshi Okita, Kosuke Ukita, Asahi Miyazaki, Daichi Kubota, Jukichi Ota, Naoki Kagiyama, Asahi Nishikawa, Daichi Nagayasu, Syunya Tomitaka, Daisuke Nozaki, Yuki Odo, Raku Yamashita, Xiaolong Ye, Huayu Gao, Kazuki Okahashi, Koki Matsuishi, Masaharu Kagiyama, Kodai Hirata, Haruki Kai (Kyushu Institute of Technology), Lin Wang (Queen Mary University of London), Hristijan Gjoreski (University in Skopje), Mathias Ciliberto (University of Cambridge), Paula Lago (Concordia University), Kazuya Murao (Ritsumeikan University), Daniel Roggen (University of Sussex)</li>
			<li>(Oral Presentation Task 2)<em>Robust Sensor-Based Activity Recognition under Domain Shift via Fine-Tuning the Time-Series Foundation Model (12 min)</em><br>
			Ryoichi Sekiguchi, Hiroshi Minowa, Masaki Kawakatsu (Tokyo Denki University)</li>
			<li><em>Video show - Task 1 (8 min)</em></li>
			<li>(Oral Presentation Task 1)<em>Revisiting Foundation Models for Human Activity Recognition: Multiresolution Sensor Fusion with TimesFM (12 min)</em><br>
			Takumi Hyugaji, Itsuki Theo Terashita, Masaki Kawakatsu (Tokyo Denki University)</li>
			<li>(Oral Presentation Task 1)<em>Ensemble of Foundation Models for Sensor-Based Locomotion and Transportation Mode Recognition (12 min)</em><br>
			Mohammad Foad Abdi, Yousef Alikhani, Mohammad Mahdi Azizi, Mohammad Saleh Azizikia, Bagher BabaAli, Mohammad Mahdi Mohebbizadeh, Arash Nasr Esfahani</li>
			<li>(Oral Presentation Task 1)<em>IMU2IMG: IMU in the Language of Vision Foundation Models (12 min)</em><br>
			Sunkyung Lee, Hyuntae Jeong, Seungeun Chung, Kyoung Ju Noh, Jeong Mook Lim, Gyuwon Jung, Se Won Oh</li>
			<li><em>Ceremony (5 min)</em></li>
		</ul>
	</td>
</tr>
<tr>
	<td>16:00-16:30</td>
	<td>
		Coffee Break
	</td>
</tr>
<tr>
	<td>16:30-17:45</td>
	<td>
		Session 4: HASCA paper session [75min] (Chair: Pekka Siirtola)<br>
		<ul>
			<li><em>Mouth Gesture Recognition Using PPG Sensors in Earbuds (15 min)</em><br>
			Taiki Yuma, Kazuya Murao (Ritsumeikan University)</li>
			<li><em>Evaluating Rhythmic Representations in Mental Health from Wearable Devices Using the GLOBEM Datasets (15 min)</em><br>
			Melika Mirzaseyedi, Abdelwahab Hamou-lhadj, Paula Lago (Concordia University)</li>
			<li><em>Fingerprint Spoof Detection during Fingerprint Authentication Using Active Acoustic Sensing (15 min)</em><br>
			Koki Okeda, Kazuya Murao (Ritsumeikan University)</li>
			<li><em>Controlling the Influence of Ranking Information on Preference Judgments by Information Presentation Across Perceptual Channels (15 min)</em><br>
			Sho Nakazawa, Kyosuke Futami, Kazuya Murao (Ritsumeikan University)</li>
			<li><em>Silent Speech-Based Personal Authentication Using a Mask-Type Device with Infrared Sensors (15 min)</em><br>
			Takumi Sakamoto, Kyosuke Futami, Kazuya Murao (Ritsumeikan University)</li>
		</ul>
	</td>
</tr>
<tr>
	<td>17:45-</td>
	<td>
		Closing
	</td>
</tr>
</table>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>program</category>
			<guid isPermaLink="true">http://hasca2025.hasc.jp/program/entry-62.html</guid>
			<pubDate>Thu, 01 May 2025 16:18:43 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Welcome to HASCA2025</title>
			<link>http://hasca2025.hasc.jp/index.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<h2 id="h670">Welcome to HASCA2025 Web site!</h2>
				

				
			
				
				
				<p>HASCA2025 is the 13th International Workshop on Human Activity Sensing Corpus and Applications. The workshop will be held in conjunction with <a href="https://www.ubicomp.org/ubicomp-iswc-2025/" target="_blank">UbiComp/ISWC2025</a>.</p>

<p><strong>Important Dates</strong><br>
Submission Deadline: June <s>15 22 (ext.)</s><b>29 (ext.)</b><br>
* For submissions after the 23rd, please follow the announcement on the submission site.<br>
Acceptance Notification: July 16<br>
<font color="red">* (update at June 8th) Submissions are now open for papers rejected at ISWC. Please submit your paper, the ISWC review results, and a letter explaining your revision policy.<br>
Submission Deadline: July 13 (AoE)</font><br>
Camera-ready: July 31 <strong>HARD!!</strong><br>
Workshop: Oct. 12 (Room U3)<br></p>

<p>For SHL Challenge and WEAR challenge, please check each challenge's conditions as dates may differ.</p>

				

				
			
				
				
				<h2 id="h672">Challenges</h2>
				

				
			
				
				
				<p >The following challenges are held with HASCA 2025.<br />
Please refer to each challenge website for details including rules and deadlines.<br />
<br />
Sussex-Huawei Locomotion (SHL) Challenge<br />
<a href="http://www.shl-dataset.org/challenges/" target="_blank">http://www.shl-dataset.org/challenges/</a><br />
<br />
WEAR Dataset Challenge<br />
<a href="https://mariusbock.github.io/wear/challenge.html" target="_blank">https://mariusbock.github.io/wear/challenge.html</a></p>
				

				
			
				
				
				<h2 id="h674">Abstract</h2>
				

				
			
				
				
				<p>The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpora and improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition. The objective of this workshop is to share the experiences among current researchers around the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence.</p>

<p>We expect the following domains to be relevant contributions to this workshop (but not limited to):</p>

				

				
			
				
				
				<h2 id="h676">Data collection / Corpus construction</h2>
				

				
			
				
				
				<p>Experiences or reports from data collection and/or corpus construction projects, such as papers describing the formats, styles, or methodologies for data collection. Crowd-sourcing data collection or participatory sensing could also be included in this topic.</p>

				

				
			
				
				
				<h2 id="h678">Effectiveness of Data / Data Centric Research</h2>
				

				
			
				
				
				<p>There is a field of research based on collected corpora, which is called “data-centric research”. We also solicit experiences of using large-scale human activity sensing corpora. By analyzing large-scale corpora with machine learning, there is large room for improving the performance of recognition results.</p>

				

				
			
				
				
				<h2 id="h680">Tools and Algorithms for Activity Recognition</h2>
				

				
			
				
				
				<p>If we had appropriate tools for the management of sensor data, activity recognition researchers could focus more on their research themes. However, developed tools and algorithms are often not shared among the research community. In this workshop, we solicit reports on developed tools and algorithms to be shared with the community.</p>

				

				
			
				
				
				<h2 id="h682">Real World Application and Experiences</h2>
				

				
			
				
				
				<p>Activity recognition "in the lab" usually works well. However, the same is not true in the real world. In this workshop, we also solicit experiences from real-world applications. There is a huge gap between the "lab environment" and the "real-world environment". Large-scale human activity sensing corpora will help to overcome this gap.</p>

				

				
			
				
				
				<h2 id="h684">Sensing Devices and Systems</h2>
				

				
			
				
				
				<p>Data collection is not only performed with "off-the-shelf" sensors. Special devices may need to be developed to obtain certain kinds of information. There is also a research area on developing and evaluating systems and technologies for data collection.</p>

				

				
			
				
				
				<h2 id="h686">Mobile experience sampling, experience sampling strategies</h2>
				

				
			
				
				
				<p >Advances in experience sampling approaches, for instance intelligently querying the user or using novel devices (e.g. smartwatches), are likely to play an important role in providing user-contributed annotations of their own activities.</p>
				

				
			
				
				
				<h2 id="h688">Unsupervised pattern discovery</h2>
				

				
			
				
				
				<p >Discovering meaningful repeating patterns in sensor data can be fundamental in informing other elements of a system generating an activity corpus, such as querying the user or triggering crowd-sourced annotation.</p>
				

				
			
				
				
				<h2 id="h690">Dataset acquisition and annotation through crowd-sourcing, web-mining</h2>
				

				
			
				
				
				<p >A wide abundance of sensor data is potentially within reach as users are instrumented with their mobile phones and other wearables. Capitalizing on crowd-sourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Online datasets could also be used to bootstrap recognition models.</p>
				

				
			
				
				
				<h2 id="h692">Transfer learning, semi-supervised learning, lifelong learning</h2>
				

				
			
				
				
				<p >The ability to translate recognition models across modalities or to use minimal supervision would allow datasets to be reused across domains and reduce the costs of acquiring annotations.</p>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<guid isPermaLink="true">http://hasca2025.hasc.jp/index.html</guid>
			<pubDate>Thu, 01 May 2025 16:18:37 +0900</pubDate>
		</item>
		<item>
			<dc:creator>hasca-web</dc:creator>
			<title>Award</title>
			<link>http://hasca2024.hasc.jp/entry-59.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<p>Thank you for attending HASCA2024!<br />
This year's HASCA successfully finished.</p>

<p>We had great presentations and papers.<br />
Among them, through participants' voting, we gave the following awards:</p>

<ul>
<li>Best Paper</li>
<li>Best Presentation</li>
</ul>

<p>This year, both the best paper and best presentation awards go to:<br />
<strong>PrISM: Unified Framework for Task Assistants powered by Multimodal Human Activity Recognition</strong><br />
<em>Riku Arakawa (Carnegie Mellon University), Mayank Goel (Carnegie Mellon University)</em></p>

				

				
			

				
				<div class="column-image-left">
					<a href="http://hasca2015.hasc.jp/archives/012/202410/large-670e462153e6a.jpg" rel="prettyPhoto[59]">
						<img class="columnImage" src="http://hasca2015.hasc.jp/archives/012/202410/670e462153e6a.jpg" alt="" width="640" height="495" />
					</a>
				</div>

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<guid isPermaLink="true">http://hasca2024.hasc.jp/entry-59.html</guid>
			<pubDate>Tue, 15 Oct 2024 19:38:54 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Call for Contributions</title>
			<link>http://hasca2024.hasc.jp/cfp/entry-58.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<p >We are pleased to announce that the HASCA (Human Activity Sensing Corpus and Applications) Workshop will take place as part of <a href="https://www.ubicomp.org/ubicomp-iswc-2024/" target="_blank">Ubicomp2024</a>.<br />
HASCA is one of the largest workshops at Ubicomp and has been held for over 12 years.</p>
				

				
			
				
				
				<h2 id="h646">Dates</h2>
				

				
			
				
				
				<p >Submission Deadline: <s>Jun. 7, 2024</s> Jun. 14, 2024 (extended)<br />
Acceptance Notification: Jul. 5, 2024<br />
Camera-ready: Jul. 19, 2024<br />
Workshop: Oct. 5th, 2024</p>
				

				
			
				
				
				<h2 id="h648">SUMMARY</h2>
				

				
			
				
				
				<p >The objective of this workshop is to share the experiences among<br />
researchers about current challenges of real-world activity<br />
recognition with newly developed datasets and tools, breaking through<br />
towards open-ended contextual intelligence.<br />
<br />
This workshop discusses the challenges of designing reproducible<br />
experimental setups, the large-scale dataset collection campaigns, the<br />
activity and context recognition methods that are robust and adaptive,<br />
and evaluation systems in the real world.<br />
<br />
As a special topic of this year we will reflect on the challenges to<br />
recognize situations, events and/or activities among the statically<br />
predefined pools and beyond - which is the current state of the art -<br />
and instead we will adopt an "open-ended view" on activity and context<br />
awareness. This may result in combinations of the automatic discovery<br />
of relevant patterns in sensor data, the experience sampling and<br />
wearable technologies to unobtrusively discover the semantic meaning<br />
of such patterns, the crowd-sourcing of dataset acquisition and<br />
annotation, and new "open-ended" human activity modeling techniques.</p>
				

				
			
				
				
				<h2 id="h650">CALL FOR CONTRIBUTIONS</h2>
				

				
			
				
				
				<p ><strong>- *Data collection*, *Corpus construction*.</strong><br />
Experiences or reports from data collection and/or corpus construction<br />
projects, including papers which describe the formats, styles and/or<br />
methodologies for data collection. Crowd-sourcing data collection and<br />
participatory sensing could also be included in this topic.<br />
<br />
<strong>- *Effectiveness of Data*, *Data Centric Research*.</strong><br />
There is a field of research based on the collected corpora, which is<br />
so-called "data-centric research". We also call for experiences of<br />
using large-scale human activity sensing corpora. By analyzing<br />
large-scale corpora with machine learning, there is large room for<br />
improving the performance of recognition results.<br />
<br />
<strong>- *Tools and Algorithms for Activity Recognition*.</strong><br />
With appropriate tools for the management of sensor data, activity recognition researchers could focus more on their actual research themes. However, developed tools and algorithms are often not shared among the research community. In this workshop, we solicit reports on developed tools and algorithms to forward to the community.<br />
<br />
<strong>- *Real World Application and Experiences*.</strong><br />
Activity recognition "in the lab" usually works well. However, it does not scale well to real-world data. In this workshop, we also solicit experiences from real-world applications. There is a huge gap between "lab" and "real world" environments. Large-scale human activity sensing corpora will help to overcome this gap.<br />
<br />
<strong>- *Sensing Devices and Systems*.</strong><br />
Data collection is performed not only with "off-the-shelf" sensors but also with newly developed sensors that supply information that has not yet been investigated. There is also a research area concerning the development of new platforms for data collection and of evaluation tools for collected data.<br />
<br />
In light of this year's special emphasis on open-ended contextual awareness, we wish to cover these topics as well:<br />
<br />
<strong>- *Mobile Experience Sampling*, *Experience Sampling Strategies*.</strong><br />
Advances in experience sampling approaches, for instance intelligent user querying or the use of novel devices (e.g. smartwatches), are likely to play an important role in providing user-contributed annotations of users' own activities.<br />
<br />
<strong>- *Unsupervised Pattern Discovery*.</strong><br />
Discovering meaningful patterns in sensor data in an unsupervised manner can inform other elements of the system, for example by querying the user or by triggering crowd-sourced annotation.<br />
<br />
<strong>- *Dataset Acquisition and Annotation*, *Crowd-Sourcing*, *Web-Mining*.</strong><br />
A wide abundance of sensor data is potentially within the reach of users instrumented with their mobile phones and other wearables. Capitalizing on crowd-sourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Many online datasets are also available and could be used to bootstrap recognition models.<br />
<br />
<strong>- *Transfer Learning*, *Semi-Supervised Learning*, *Lifelong Learning*.</strong><br />
The ability to translate recognition models across modalities, or to use minimal forms of supervision, would allow datasets to be reused in a wider range of domains and reduce the costs of acquiring annotations.<br />
<br />
<strong>- *Deep Learning*.</strong><br />
Following the great success of deep learning in other AI domains, deep learning models are gradually playing an important role in activity recognition as well.</p>
				

				
			
				
				
				<h2 id="h652">AREAS OF INTEREST</h2>
				

				
			
				
				
				<ul >
<li>Human Activity Sensing Corpus</li>
<li>Large Scale Data Collection</li>
<li>Data Validation</li>
<li>Data Tagging / Labeling</li>
<li>Efficient Data Collection</li>
<li>Data Mining from Corpus</li>
<li>Automatic Segmentation</li>
<li>Performance Evaluation</li>
<li>Man-machine Interaction</li>
<li>Noise Robustness</li>
<li>Unsupervised Machine Learning</li>
<li>Sensor Data Fusion</li>
<li>Tools for Human Activity Corpus/Sensing</li>
<li>Participatory Sensing</li>
<li>Feature Extraction and Selection</li>
<li>Context Awareness</li>
<li>Pedestrian Navigation</li>
<li>Social Activities Analysis/Detection</li>
<li>Compressive Sensing</li>
<li>Sensing Devices</li>
<li>Lifelog Systems</li>
<li>Route Recognition/Detection</li>
<li>Wearable Application</li>
<li>Gait Analysis</li>
<li>Health-care Monitoring/Recommendation</li>
<li>Daily-life Worker Support</li>
<li>Deep Learning</li>
</ul>
				

				
			
				
				
				<h2 id="h654">FORMAT &amp; TEMPLATE</h2>
				

				
			
				
				
				<p ><b>Papers must be 6 pages, including references, in the 2-column format.</b><br />
<br />
ACM requires UbiComp/ISWC 2024 workshop submissions to use the double-column template. Please note that the template for submission is in double-column format, while the template for publication (camera-ready) is in single-column format.<br />
Please carefully read <a href="https://www.ubicomp.org/ubicomp-iswc-2024/authors/formatting/" target="_blank">Ubicomp website about the template</a>.<br />
<br />
<b>Submissions do not need to be anonymous</b>.<br />
All submissions will be peer-reviewed with regard to their contribution to the topic of the workshop.<br />
The accepted papers will be published in the UbiComp/ISWC 2024 adjunct proceedings, which will be included in the ACM Digital Library.<br />
</p>
				

				
			
				
				
				<h2 id="h656">SUBMISSION</h2>
				

				
			
				
				
				<p >Please submit your papers from <a href="https://new.precisionconference.com/submissions" target="_blank" rel="noopener noreferrer">https://new.precisionconference.com/submissions</a><br />
Make a new submission as follows:</p>
				

				
			
				
				
				<ol >
<li>Society: SIGCHI</li>
<li>Conference/Journal: UbiComp/ISWC 2024</li>
<li>Track: UbiComp/ISWC 2024 12th Workshop on HASCA</li>
<li>Press the "Go" button</li>
</ol>
				

				
			
				
				
				<h2 id="h658">IMPORTANT DATES</h2>
				

				
			
				
				
				<p >Full research/short technical papers:</p>
				

				
			
				
				
				<ul >
<li>Submission Deadline: <s>Jun. 7, 2024</s> Jun. 14, 2024 (extended)</li>
<li>Acceptance Notification: Jul. 5, 2024</li>
<li>Camera-ready: Jul. 19, 2024</li>
<li>Workshop: Oct. 5, 2024</li>
</ul>
				

				
			
				
				
				<h2 id="h661">SPECIAL SESSION</h2>
				

				
			
				
				
				<p >This year, the following challenges will be held in conjunction with HASCA.<br />
<br />
Sussex-Huawei Locomotion Challenge 2024<br />
<a href="http://www.shl-dataset.org/activity-recognition-challenge-2024/" target="_blank">http://www.shl-dataset.org/activity-recognition-challenge-2024/</a><br />
<br />
WEAR Dataset Challenge<br />
<a href="https://mariusbock.github.io/wear/challenge.html" target="_blank">https://mariusbock.github.io/wear/challenge.html</a></p>
				

				
			
				
				
				<h2 id="h663">CONTACT<br />
hasca-organizer[at]ml.hasc.jp</h2>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>cfp</category>
			<guid isPermaLink="true">http://hasca2024.hasc.jp/cfp/entry-58.html</guid>
			<pubDate>Thu, 25 Apr 2024 16:27:16 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Program</title>
			<link>http://hasca2024.hasc.jp/program/entry-57.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<p >We apologize for the delay in publishing this program.<br />
<br />
HASCA Workshop will take place on Saturday, 5th Oct. at Victoria Suite (Room 2).<br />
<br />
Presentation time:<br />
HASCA oral presentation - 15 min (10-min talk + 5-min Q&A)<br />
Other presentations - follow the timetable<br />
</p>
				

				
			
				
				
				<table>
<tr>
	<td>08:00-09:00</td>
	<td>
		Registration<br>
	</td>
</tr>
<tr>
	<td>09:00-10:30</td>
	<td>
		Session 1: HASCA session [90 min] (Chair: Tsuyoshi Okita)<br>
		<ul>
			<li><em>Large Language Models for Generating Semantic Nursing Activity Logs: Exploiting Temporal and Contextual Information</em><br>
			Nazmun Nahid (Kyushu Institute of Technology), Ryuya Munemoto (Kyushu Institute of Technology), Sozo Inoue (Kyushu Institute of Technology)</li>
			<li><em>Synthetic Skeleton Data Generation using Large Language Model for Nurse Activity Recognition</em><br>
			Umang Dobhal (Dronacharya College of Engineering), Christina Alvarez Garcia (Kyushu Institute of Technology), Sozo Inoue (Kyushu Institute of Technology)</li>
			<li><em>Initial Investigation of Kolmogorov-Arnold Networks (KANs) as Feature Extractors for IMU Based Human Activity Recognition</em><br>
			Mengxi Liu (German Research Center for Artificial Intelligence), Daniel Gei&#223;ler (German Research Center for Artificial Intelligence), Dominique Nshimyimana (DFKI), Sizhen Bian (ETH Z&#252;rich), Bo Zhou (German Research Center for Artificial Intelligence), Paul Lukowicz (DFKI)</li>
			<li><em>PrISM: Unified Framework for Task Assistants powered by Multimodal Human Activity Recognition</em><br>
			Riku Arakawa (Carnegie Mellon University), Mayank Goel (Carnegie Mellon University)</li>
			<li><em>DNN Model Comparison for Sensor Location Robustness</em><br>
			Yu Enokibori (Nagoya University), Takahiro Saito (Nagoya University), Kenji Mase (Nagoya University)</li>
			<li><em>Emotion Recognition on the Go: Utilizing Wearable IMUs for Personalized Emotion Recognition</em><br>
			Zikang Leng (Georgia Institute of Technology), Myeongul Jung (Hanyang University), Sungjin Hwang (Hanyang University), Seungwoo Oh (Hanyang University), Lizhe Zhang (Georgia Institute of Technology), Thomas Ploetz (Georgia Institute of Technology), Kwanguk Kim (Hanyang University)</li>
		</ul>
	</td>
</tr>
<tr>
	<td>10:30-11:00</td>
	<td>
		Coffee Break
	</td>
</tr>
<tr>
	<td>11:00-12:30</td>
	<td>
		Session 2: HASCA session [30 min] + WEAR session [60 min] (Chair: Marius Bock)<br>
		HASCA
		<ul>
			<li><em>Diffusion Model-based Classifier for Human Activity Recognition</em><br>
			Kosuke Ukita (Kyushu Institute of Technology), Tsuyoshi Okita (Kyushu Institute of Technology)</li>
			<li><em>Game of LLMs: Discovering Structural Constructs in Activities using Large Language Models</em><br>
			Shruthi Kashinath Hiremath (Georgia Institute of Technology), Thomas Ploetz (Georgia Institute of Technology)</li>
		</ul>
		WEAR
		<ul>
			<li><em>Introduction and Challenge Overview</em><br>
			Marius Bock</li>
			<li><em>TA-DA! - Improving Activity Recognition using Temporal Adapters and Data Augmentation</em><br>
			Maximilian Hopp (University of Siegen), Helge Hartleb (University of Siegen), Robin Burchard (University of Siegen)</li>
			<li><em>Left-Right Swapping and Upper-Lower Limb Pairing for Robust Multi-Wearable Workout Activity Detection</em><br>
			Jonas Van Der Donckt (Ghent University), Jeroen Van Der Donckt (Ghent University - imec), Sofie Van Hoecke (Ghent University - imec)</li>
			<li><em>Augmentation Approaches to Refine Wearable Human Activity Recognition</em><br>
			Somesh Salunkhe (University of Siegen), Shubham Pradeep Shinde (University of Siegen), Pradnyesh Patil (University of Siegen), Robin Burchard (University of Siegen)</li>
			<li><em>Results & Winners Ceremony</em><br></li>
		</ul>
	</td>
</tr>
<tr>
	<td>12:30-14:00</td>
	<td>
		Lunch Break
	</td>
</tr>
<tr>
	<td>14:00-15:30</td>
	<td>
		Session 3: SHL session [90 min] (Chair: Mathias Ciliberto, Kazuya Murao, Lin Wang)<br>
		<ul>
			<li>Summary talk [15 min]</li>
			<li>Paper 1 [12 min]</li>
			<li>Paper 2 [12 min]</li>
			<li>Paper 3 [12 min]</li>
			<li>Ceremony [5 min]</li>
			<li>Poster [10 min]</li>
		</ul>
	</td>
</tr>
<tr>
	<td>15:30-16:00</td>
	<td>
		Coffee Break with SHL poster (cont'd)
	</td>
</tr>
<tr>
	<td>16:00-17:30</td>
	<td>
		Session 4: HASCA session [90 min] (Chair: Yu Enokibori)<br>
		<ul>
			<li><em>Water Level Recognition by Analyzing the Sound when Pouring Water</em><br>
			Atsuhiro Fujii (Ritsumeikan University), Kazuya Murao (Ritsumeikan University)</li>
			<li><em>A System to Visualize Differences in Paddling Timing between Teammates in Rowing</em><br>
			Daiki Takahashi (Ritsumeikan University), Kazuya Murao (Ritsumeikan University)</li>
			<li><em>A Monocular Fisheye Video-Based 2D to 3D Pose Lift Technique with Multiperson Spatial Context Integration</em><br>
			Iqbal Hassan (Kyushu Institute of Technology), Nazmun Nahid (Kyushu Institute of Technology), Sozo Inoue (Kyushu Institute of Technology)</li>
			<li><em>Composite Image Generation Using Labeled Segments for Pattern-Rich Dataset without Unannotated Target</em><br>
			Kazuma Kano (Nagoya University), Yuki Mori (Nagoya University), Keisuke Higashiura (Nagoya University), Tahera Hossain (Nagoya University), Shin Katayama (Nagoya University), Kenta Urano (Nagoya University), Takuro Yonezawa (Nagoya University), Nobuo Kawaguchi (Nagoya University)</li>
			<li><em>User Authentication Method for Smart Glasses using Gaze Information of Registered Known Images and AI-generated Unknown Images</em><br>
			Masaya Inoue (Ritsumeikan University), Kazuya Murao (Ritsumeikan University)</li>
			<li><em>Face Recognition Reinforcement using Pulse Waves of Front Camera Face Image and Rear Camera Finger Image on a Smartphone</em><br>
			Taiki Yuma (Ritsumeikan University), Kazuya Murao (Ritsumeikan University)</li>
		</ul>
	</td>
</tr>
<tr>
	<td>17:30-</td>
	<td>
		Closing
	</td>
</tr>
</table>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>program</category>
			<guid isPermaLink="true">http://hasca2024.hasc.jp/program/entry-57.html</guid>
			<pubDate>Thu, 25 Apr 2024 16:27:06 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Welcome to HASCA2024</title>
			<link>http://hasca2024.hasc.jp/entry-55.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<h2 id="h615">Welcome to HASCA2024 Web site!</h2>
				

				
			
				
				
				<p>HASCA2024 is the 12th International Workshop on Human Activity Sensing Corpus and Applications. The workshop will be held in conjunction with <a href="https://www.ubicomp.org/ubicomp-iswc-2024/" target="_blank">UbiComp/ISWC2024</a>.</p>

<p><strong>Important Dates</strong><br>
Submission Deadline: <s>Jun. 7</s> Jun. 14 (extended)<br>
Acceptance Notification: Jul. 5<br>
Camera-ready: Jul. 19<br>
Workshop: Oct. 5, 2024 in Melbourne, Australia<br></p>

				

				
			
				
				
				<h2 id="h665">Challenges</h2>
				

				
			
				
				
				<p >The following challenges will be held with HASCA 2024.<br />
Please refer to each challenge website for details including rules and deadlines.<br />
<br />
Sussex-Huawei Locomotion Challenge 2024<br />
<a href="http://www.shl-dataset.org/activity-recognition-challenge-2024/" target="_blank">http://www.shl-dataset.org/activity-recognition-challenge-2024/</a><br />
<br />
WEAR Dataset Challenge<br />
<a href="https://mariusbock.github.io/wear/challenge.html" target="_blank">https://mariusbock.github.io/wear/challenge.html</a></p>
				

				
			
				
				
				<h2 id="h617">Abstract</h2>
				

				
			
				
				
				<p>The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpora and improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition. The objective of this workshop is to share the experiences among current researchers around the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence.</p>

<p>We expect the following domains to be relevant contributions to this workshop (but not limited to):</p>

				

				
			
				
				
				<h2 id="h619">Data collection / Corpus construction</h2>
				

				
			
				
				
				<p>Experiences or reports from data collection and/or corpus construction projects, such as papers describing the formats, styles or methodologies for data collection. Crowd-sourcing data collection or participatory sensing could also be included in this topic.</p>

				

				
			
				
				
				<h2 id="h621">Effectiveness of Data / Data Centric Research</h2>
				

				
			
				
				
				<p>There is a field of research based on collected corpora, so-called “Data Centric Research”. We also solicit experiences of using large-scale human activity sensing corpora. By analyzing large-scale corpora with machine learning, there is large room to improve the performance of recognition results.</p>

				

				
			
				
				
				<h2 id="h623">Tools and Algorithms for Activity Recognition</h2>
				

				
			
				
				
				<p>With appropriate tools for the management of sensor data, activity recognition researchers could focus more on their research themes. However, the development of tools and algorithms for sharing among the research community receives little recognition. In this workshop, we solicit development reports of tools and algorithms to forward to the community.</p>

				

				
			
				
				
				<h2 id="h625">Real World Application and Experiences</h2>
				

				
			
				
				
				<p>Activity recognition "in the lab" usually works well. However, the same is not true in the real world. In this workshop, we also solicit experiences from real-world applications. There is a huge gap between the "lab environment" and the "real-world environment". Large-scale human activity sensing corpora will help to overcome this gap.</p>

				

				
			
				
				
				<h2 id="h627">Sensing Devices and Systems</h2>
				

				
			
				
				
				<p>Data collection is not performed only with "off-the-shelf" sensors; special devices sometimes need to be developed to obtain certain kinds of information. There is also a research area concerned with developing and evaluating systems and technologies for data collection.</p>

				

				
			
				
				
				<h2 id="h629">Mobile experience sampling, experience sampling strategies</h2>
				

				
			
				
				
				<p >Advances in experience sampling approaches, for instance intelligently querying the user or using novel devices (e.g. smartwatches), are likely to play an important role in providing user-contributed annotations of their own activities.</p>
				

				
			
				
				
				<h2 id="h631">Unsupervised pattern discovery</h2>
				

				
			
				
				
				<p >Discovering meaningful repeating patterns in sensor data can be fundamental in informing other elements of a system generating an activity corpus, such as querying the user or triggering crowd-sourced annotation.</p>
				

				
			
				
				
				<h2 id="h633">Dataset acquisition and annotation through crowd-sourcing, web-mining</h2>
				

				
			
				
				
				<p >A wide abundance of sensor data is potentially within reach, with users instrumented with their mobile phones and other wearables. Capitalizing on crowd-sourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Online datasets could also be used to bootstrap recognition models.</p>
				

				
			
				
				
				<h2 id="h635">Transfer learning, semi-supervised learning, lifelong learning</h2>
				

				
			
				
				
				<p >The ability to translate recognition models across modalities, or to use minimal supervision, would allow datasets to be reused across domains and reduce the costs of acquiring annotations.</p>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<guid isPermaLink="true">http://hasca2024.hasc.jp/entry-55.html</guid>
			<pubDate>Thu, 25 Apr 2024 16:26:21 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Program</title>
			<link>http://hasca2023.hasc.jp/program/entry-52.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<p >Presentation time:<br />
HASCA oral presentation, 20 min (approx. 15-min talk + 5-min Q&A)<br />
</p>
				

				
			
				
				
				<table>
<tr>
	<td>09:00-09:10</td>
	<td>
		Opening (Chair: Kazuya Murao)<br>
	</td>
</tr>
<tr>
	<td>09:10-10:30</td>
	<td>
		Session 1 [20min x4] (Chair: Kazuya Murao)<br>
		<ul>
			<li><em>Towards LLMs for Sensor Data: Multi-Task Self-Supervised Learning</em><br>
			Tsuyoshi Okita(Kyushu Institute of Technology), Kosuke Ukita(Kyushu Institute of Technology), Koki Matsuishi(Kyushu Institute of Technology), Masaharu Kagiyama(Kyushu Institute of Technology), Kodai Hirata(Kyushu Institute of Technology), Asahi Miyazaki(Kyushu Institute of Technology)</li>
			<li><em>Predicting and Analyzing Emotion of Elderly People in Care Facilities</em><br>
			Xinyi Min(Kyushu Institute of Technology), Haru Kaneko(Kyushu Institute of Technology), Sozo Inoue(Kyushu Institute of Technology)</li>
			<li><em>Personalized federated human activity recognition through semi-supervised learning and enhanced representation</em><br>
			Lulu Gao(Kyushu University), Shin'ichi Konomi(Kyushu University)</li>
			<li><em>Investigating the Effect of Orientation Variability in Deep Learning-based Human Activity Recognition</em><br>
			Azhar Ali Khaked(Concordia University), Nobuyuki Oishi(University of Sussex), Daniel Roggen(University of Sussex), Paula Lago(Concordia University)</li>
		</ul>
	</td>
</tr>
<tr>
	<td>10:30-11:00</td>
	<td>
		Coffee Break
	</td>
</tr>
<tr>
	<td>11:00-12:20</td>
	<td>
		Session 2 [20min x4] (Chair: Paula Lago)<br>
		<ul>
			<li><em>Cardiac massage practice application using barometer in a smart phone and sealed bag</em><br>
			Soto Mizukusa(Aichi Institute of Technology), Katsuhiko Kaji(Aichi Institute of Technology)</li>
			<li><em>Eye movement differences in Japanese text reading between cognitively healthy older and younger adults</em><br>
			Jumpei Kobayashi(Dai Nippon Printing Co., Ltd.), Hiroyuki Suzuki(Tokyo Metropolitan Institute for Geriatrics and Gerontology), Kenichiro Sato(Tokyo Metropolitan Institute for Geriatrics and Gerontology), Susumu Ogawa(Tokyo Metropolitan Institute for Geriatrics and Gerontology), Hiroko Matsunaga(Tokyo Metropolitan Institute for Geriatrics and Gerontology), Toshio Kawashima(Future University Hakodate)</li>
			<li><em>A Data-Driven Study on the Hawthorne Effect in Sensor-Based Human Activity Recognition</em><br>
			Alexander Hoelzemann(University of Siegen), Marius Bock(University of Siegen), Ericka Andrea Valladares Bastias(University of Siegen), Salma El Ouazzani Touhami(University of Siegen), Kenza Nassiri(University of Siegen), Kristof Van Laerhoven(University of Siegen)</li>
			<li><em>Eco-Friendly Sensing for Human Activity Recognition</em><br>
			Kaede Shintani(Osaka University), Hamada Rizk(Osaka University), Hirozumi Yamaguchi(Osaka University)</li>
		</ul>
	</td>
</tr>
<tr>
	<td>12:30-14:00</td>
	<td>
		Lunch Break
	</td>
</tr>
<tr>
	<td>14:00-15:30</td>
	<td>
		Session 3 [SHL session]<br>
		<ul>
			<li>SHL intro [4 min]</li>
			<li>SHL summary paper [15 min]</li>
			<li>SHL top 3 papers [36 min]</li>
			<li>SHL award ceremony [5 min]</li>
			<li>SHL poster session [18 min]</li>
		</ul>
	</td>
</tr>
<tr>
	<td>15:30-16:00</td>
	<td>
		Coffee Break with SHL poster (cont'd)
	</td>
</tr>
<tr>
	<td>16:00-17:00</td>
	<td>
		Session 4 [20min x3] (Chair: Yu Enokibori)<br>
		<ul>
			<li><em>Where Are the Best Positions of IMU Sensors for HAR? - Approach by a Garment Device with Fine-Grained Grid IMUs -</em><br>
			Akihisa Tsukamoto(Nagoya University), Naoto Yoshida(Kogakuin University), Tomoko Yonezawa(Kansai University), Kenji Mase(Nagoya University), Yu Enokibori(Nagoya University)</li>
			<li><em>Toward Pioneering Sensors and Features Using Large Language Models in Human Activity Recognition</em><br>
			Haru Kaneko(Kyushu Institute of Technology), Sozo Inoue(Kyushu Institute of Technology)</li>
			<li><em>Human activity recognition for packing processes using CNN-biLSTM</em><br>
			Alberto Angulo(Sonora Institute of Technology), Jessica Beltran(Universidad Autonoma de Coahuila), Luis A. Castro(Sonora Institute of Technology)</li>
		</ul>
	</td>
</tr>
<tr>
	<td>17:00-17:10</td>
	<td>
		Closing
	</td>
</tr>
</table>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>program</category>
			<guid isPermaLink="true">http://hasca2023.hasc.jp/program/entry-52.html</guid>
			<pubDate>Mon, 25 Sep 2023 12:00:00 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Welcome to HASCA2023</title>
			<link>http://hasca2023.hasc.jp/entry-54.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<h2 id="h593">Welcome to HASCA2023 Web site!</h2>
				

				
			
				
				
				<p>HASCA2023 is the 11th International Workshop on Human Activity Sensing Corpus and Applications. The workshop will be held in conjunction with UbiComp/ISWC2023.</p>

<p><strong>Important Dates</strong><br>
Submission Deadline: <s>June 5th, 2023</s> <strong>June 12th, 2023 (Extended)</strong><br>
Acceptance Notification: June 30th, 2023<br>
Camera-ready: July 10th, 2023<br>
Workshop: October 8th, 2023<br></p>

<p><strong>Notice</strong><br>
This year, the venue of HASCA 2023 workshop will be Cancun, Mexico.</p>

				

				
			
				
				
				<h2 id="h595">Abstract</h2>
				

				
			
				
				
				<p>The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpora and improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition. The objective of this workshop is to share the experiences among current researchers around the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence.</p>

<p>We expect the following domains to be relevant contributions to this workshop (but not limited to):</p>

				

				
			
				
				
				<h2 id="h597">Data collection / Corpus construction</h2>
				

				
			
				
				
				<p>Experiences or reports from data collection and/or corpus construction projects, such as papers describing the formats, styles or methodologies for data collection. Crowd-sourcing data collection or participatory sensing could also be included in this topic.</p>

				

				
			
				
				
				<h2 id="h599">Effectiveness of Data / Data Centric Research</h2>
				

				
			
				
				
				<p>There is a field of research based on collected corpora, so-called “Data Centric Research”. We also solicit experiences of using large-scale human activity sensing corpora. By analyzing large-scale corpora with machine learning, there is large room to improve the performance of recognition results.</p>

				

				
			
				
				
				<h2 id="h601">Tools and Algorithms for Activity Recognition</h2>
				

				
			
				
				
				<p>With appropriate tools for the management of sensor data, activity recognition researchers could focus more on their research themes. However, the development of tools and algorithms for sharing among the research community receives little recognition. In this workshop, we solicit development reports of tools and algorithms to forward to the community.</p>

				

				
			
				
				
				<h2 id="h603">Real World Application and Experiences</h2>
				

				
			
				
				
				<p>Activity recognition "in the lab" usually works well. However, the same is not true in the real world. In this workshop, we also solicit experiences from real-world applications. There is a huge gap between the "lab environment" and the "real-world environment". Large-scale human activity sensing corpora will help to overcome this gap.</p>

				

				
			
				
				
				<h2 id="h605">Sensing Devices and Systems</h2>
				

				
			
				
				
				<p>Data collection is not performed only with "off-the-shelf" sensors; special devices sometimes need to be developed to obtain certain kinds of information. There is also a research area concerned with developing and evaluating systems and technologies for data collection.</p>

				

				
			
				
				
				<h2 id="h607">Mobile experience sampling, experience sampling strategies</h2>
				

				
			
				
				
				<p >Advances in experience sampling approaches, for instance intelligently querying the user or using novel devices (e.g. smartwatches), are likely to play an important role in providing user-contributed annotations of their own activities.</p>
				

				
			
				
				
				<h2 id="h609">Unsupervised pattern discovery</h2>
				

				
			
				
				
				<p >Discovering meaningful repeating patterns in sensor data can be fundamental in informing other elements of a system generating an activity corpus, such as querying the user or triggering crowd-sourced annotation.</p>
				

				
			
				
				
				<h2 id="h611">Dataset acquisition and annotation through crowd-sourcing, web-mining</h2>
				

				
			
				
				
				<p >A wide abundance of sensor data is potentially within reach, with users instrumented with their mobile phones and other wearables. Capitalizing on crowd-sourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Online datasets could also be used to bootstrap recognition models.</p>
				

				
			
				
				
				<h2 id="h613">Transfer learning, semi-supervised learning, lifelong learning</h2>
				

				
			
				
				
				<p >The ability to translate recognition models across modalities, or to use minimal supervision, would allow datasets to be reused across domains and reduce the costs of acquiring annotations.</p>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<guid isPermaLink="true">http://hasca2023.hasc.jp/entry-54.html</guid>
			<pubDate>Fri, 07 Apr 2023 12:53:11 +0900</pubDate>
		</item>
	</channel>
</rss>