Automatic Extraction of Urban Outdoor Perception from Geolocated
Free-Texts
- URL: http://arxiv.org/abs/2010.06444v1
- Date: Tue, 13 Oct 2020 14:59:46 GMT
- Title: Automatic Extraction of Urban Outdoor Perception from Geolocated
Free-Texts
- Authors: Frances Santos, Thiago H Silva, Antonio A F Loureiro, Leandro Villas
- Abstract summary: We propose an automatic and generic approach to extract people's perceptions.
We exemplify our approach in the context of urban outdoor areas in Chicago, New York City and London.
We show that our approach can help to better understand urban areas from different perspectives.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The automatic extraction of urban perception shared by people on
location-based social networks (LBSNs) is an important multidisciplinary
research goal. One reason is that it facilitates the understanding of the
intrinsic characteristics of urban areas in a scalable way, helping to
leverage new services. However, content shared on LBSNs is diverse,
encompassing several topics, such as politics, sports, culture, religion, and
urban perceptions, making the task of content extraction regarding a particular
topic very challenging. Considering free-text messages shared on LBSNs, we
propose an automatic and generic approach to extract people's perceptions. For
that, our approach explores opinions that are spatially, temporally, and semantically
similar. We exemplify our approach in the context of urban outdoor areas in
Chicago, New York City and London. Studying those areas, we found evidence that
LBSN data provides valuable information about urban regions. To analyze and
validate our outcomes, we conducted a temporal analysis to measure the results'
robustness over time. We show that our approach can help to better
understand urban areas from different perspectives. We also conducted a
comparative analysis based on a public dataset, which contains volunteers'
perceptions regarding urban areas expressed in a controlled experiment. We
observe that both results yield a very similar level of agreement.
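The core idea of the approach, grouping opinions that are close in space, in time, and in meaning, can be illustrated with a minimal sketch. Everything below (the post records, the thresholds, the bag-of-words similarity) is a hypothetical illustration, not the authors' actual implementation:

```python
import math
from collections import Counter
from itertools import combinations

# Hypothetical post records: (latitude, longitude, unix timestamp, free text).
posts = [
    (41.88, -87.63, 1000, "beautiful quiet park near the lake"),
    (41.88, -87.64, 2000, "lovely park by the lake, very quiet"),
    (40.71, -74.00, 1500, "crowded noisy street, heavy traffic"),
]

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def cosine(t1, t2):
    """Bag-of-words cosine similarity, a crude stand-in for semantic similarity."""
    c1, c2 = Counter(t1.split()), Counter(t2.split())
    dot = sum(c1[w] * c2[w] for w in c1)
    norm = (math.sqrt(sum(v * v for v in c1.values()))
            * math.sqrt(sum(v * v for v in c2.values())))
    return dot / norm if norm else 0.0

def similar(p, q, max_km=1.0, max_dt=3600, min_sim=0.3):
    """Two posts count as one shared perception if close in space, time, and meaning."""
    return (haversine_km(p[:2], q[:2]) <= max_km
            and abs(p[2] - q[2]) <= max_dt
            and cosine(p[3], q[3]) >= min_sim)

# Pair up posts that express a shared perception.
pairs = [(i, j) for (i, p), (j, q) in combinations(enumerate(posts), 2)
         if similar(p, q)]
print(pairs)  # → [(0, 1)]: only the two nearby park posts are grouped
```

A real pipeline would replace the bag-of-words similarity with a proper text-embedding model and cluster transitively rather than pairwise, but the three-way spatial/temporal/semantic filter is the same shape.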
Related papers
- UrBench: A Comprehensive Benchmark for Evaluating Large Multimodal Models in Multi-View Urban Scenarios
We present UrBench, a benchmark designed for evaluating LMMs in complex multi-view urban scenarios.
UrBench contains 11.6K meticulously curated questions at both region-level and role-level.
Our evaluations on 21 LMMs show that current LMMs struggle in urban environments in several respects.
arXiv Detail & Related papers (2024-08-30T13:13:35Z)
- Uncover the nature of overlapping community in cities
Our study introduces a graph-based physics-aware deep learning framework, illuminating the intricate overlapping nature inherent in urban communities.
Through analysis of individual mobile phone positioning data in the Twin Cities metro area (TCMA) in Minnesota, USA, our findings reveal that 95.7% of urban functional complexity stems from the overlapping structure of communities during weekdays.
arXiv Detail & Related papers (2024-01-31T22:50:49Z)
- Geo-located Aspect Based Sentiment Analysis (ABSA) for Crowdsourced Evaluation of Urban Environments
We develop an ABSA model capable of extracting urban aspects contained within geo-located textual urban appraisals, along with corresponding aspect sentiment classification.
Our model achieves significant improvement in prediction accuracy on urban reviews, for both Aspect Term Extraction (ATE) and Aspect Sentiment Classification (ASC) tasks.
For demonstrative analysis, positive and negative urban aspects across Boston are spatially visualized.
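The kind of spatial visualization described above starts from per-review aspect sentiments tallied per area. A crude illustration, with invented neighborhoods, aspect labels, and counts (nothing here comes from the paper's model or data):

```python
from collections import defaultdict

# Hypothetical (neighborhood, aspect, sentiment) triples, as an ABSA model
# might emit for geo-located urban reviews; values invented for illustration.
extractions = [
    ("Back Bay", "greenery", "positive"),
    ("Back Bay", "traffic", "negative"),
    ("Dorchester", "greenery", "positive"),
    ("Back Bay", "greenery", "positive"),
]

# Tally sentiment per (neighborhood, aspect) so positive and negative
# aspects can later be mapped spatially, one layer per aspect.
tally = defaultdict(lambda: {"positive": 0, "negative": 0})
for hood, aspect, sentiment in extractions:
    tally[(hood, aspect)][sentiment] += 1

for key, counts in sorted(tally.items()):
    print(key, counts)
```

Each `(neighborhood, aspect)` cell then maps directly onto a choropleth layer in whatever GIS tool is used for the visualization step.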
arXiv Detail & Related papers (2023-12-19T15:37:27Z)
- Cross-City Matters: A Multimodal Remote Sensing Benchmark Dataset for Cross-City Semantic Segmentation using High-Resolution Domain Adaptation Networks
We build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, SAR) for the study purpose of the cross-city semantic segmentation task.
Beyond the single city, we propose a high-resolution domain adaptation network, HighDAN, to promote the AI model's generalization ability across multi-city environments.
HighDAN is capable of retaining the spatially topological structure of the studied urban scene well in a parallel high-to-low resolution fusion fashion.
arXiv Detail & Related papers (2023-09-26T23:55:39Z)
- The Urban Toolkit: A Grammar-based Framework for Urban Visual Analytics
The complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights.
When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers.
This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science.
arXiv Detail & Related papers (2023-08-15T13:43:04Z)
- Multi-Temporal Relationship Inference in Urban Areas
Finding temporal relationships among locations can benefit a range of urban applications, such as dynamic offline advertising and smart public transport planning.
We propose a solution to Trial with a graph learning scheme, which includes a spatially evolving graph neural network (SEENet).
SEConv performs the intra-time aggregation and inter-time propagation to capture the multifaceted spatially evolving contexts from the view of location message passing.
SE-SSL designs time-aware self-supervised learning tasks in a global-local manner with additional evolving constraint to enhance the location representation learning and further handle the relationship sparsity.
arXiv Detail & Related papers (2023-06-15T07:48:32Z)
- A Contextual Master-Slave Framework on Urban Region Graph for Urban Village Detection
We build an urban region graph (URG) to model the urban area in a hierarchically structured way.
Then, we design a novel contextual master-slave framework to effectively detect the urban village from the URG.
The proposed framework can learn to balance generality and specificity for urban village (UV) detection in an urban area.
arXiv Detail & Related papers (2022-11-26T18:17:39Z)
- Effective Urban Region Representation Learning Using Heterogeneous Urban Graph Attention Network (HUGAT)
We propose heterogeneous urban graph attention network (HUGAT) for learning the representations of urban regions.
In our experiments on NYC data, HUGAT outperformed all the state-of-the-art models.
arXiv Detail & Related papers (2022-02-18T04:59:20Z)
- Methodological Foundation of a Numerical Taxonomy of Urban Form
We present a method for numerical taxonomy of urban form derived from biological systematics.
We derive homogeneous urban tissue types and, by determining overall morphological similarity between them, generate a hierarchical classification of urban form.
After framing and presenting the method, we test it on two cities - Prague and Amsterdam.
arXiv Detail & Related papers (2021-04-30T12:47:52Z)
- Discovering Underground Maps from Fashion
We propose a method to automatically create underground neighborhood maps of cities by analyzing how people dress.
Using publicly available images from across a city, our method finds neighborhoods with a similar fashion sense and segments the map without supervision.
arXiv Detail & Related papers (2020-12-04T23:40:59Z)
- City limits in the age of smartphones and urban scaling
Urban planning still lacks appropriate standards to define city boundaries across urban systems.
Information and communication technologies (ICT) offer the potential to produce more accurate descriptions of urban systems.
We apply computational techniques over a large volume of mobile phone records to define urban boundaries.
arXiv Detail & Related papers (2020-05-06T17:31:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.