Base datasets
Collection, 6 items. Basic datasets from which other combined datasets are formed.
Schema (15 columns; ranges are the minimum and maximum observed lengths, "nullable" marks columns that may be null):

- id: string, 9-10 chars
- submitter: string, 1-64 chars, nullable
- authors: string, 4-20.7k chars
- title: string, 4-246 chars
- comments: string, 1-523 chars, nullable
- journal-ref: string, 4-404 chars, nullable
- doi: string, 11-153 chars, nullable
- report-no: string, 2-254 chars, nullable
- categories: string, 5-98 chars
- license: string, 9 distinct values
- orig_abstract: string, 14-3.35k chars
- versions: list, 1-60 entries
- update_date: string, 10 chars
- authors_parsed: sequence, 1-1.35k entries
- abstract: string, 11-3.34k chars
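Since the card documents the columns but not usage, here is a minimal sketch of loading one of these base datasets with the Hugging Face `datasets` library. The repository id `user/arxiv-base` is a placeholder, not the real name; substitute the actual dataset from this collection.

```python
# Minimal sketch: load one base dataset and inspect its schema.
# "user/arxiv-base" is a placeholder repository id (assumption).
from datasets import load_dataset

ds = load_dataset("user/arxiv-base", split="train")

print(ds.column_names)  # the 15 columns listed above
print(ds.features)      # feature types

row = ds[0]
# Nullable columns (submitter, comments, journal-ref, doi, report-no)
# may come back as None.
print(row["id"], "-", row["title"])
```

The sample rows of the preview table follow, shown one labeled record per row.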
id: 1906.10370
submitter: Miguel Sepulcre
authors: Rafael Molina-Masegosa, Miguel Sepulcre and Javier Gozalvez
title: Geo-Based Scheduling for C-V2X Networks
comments: null
journal-ref: IEEE Transactions on Vehicular Technology, Volume 68, Issue 9, pp. 8397 - 8407, September 2019
doi: 10.1109/TVT.2019.2924698
report-no: null
categories: cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Cellular Vehicle-to-Everything (C-V2X) networks can operate without cellular infrastructure support. Vehicles can autonomously select their radio resources using the sensing-based Semi-Persistent Scheduling (SPS) algorithm specified by the Third Generation Partnership Project (3GPP). The sensing nature of the SPS scheme makes C-V2X communications prone to the well-known hidden-terminal problem. To address this problem, this paper proposes a novel geo-based scheduling scheme that allows vehicles to autonomously select their radio resources based on the location and ordering of neighboring vehicles on the road. The proposed scheme results in an implicit resource selection coordination between vehicles (even with those outside the sensing range) that reduces packet collisions. This paper evaluates analytically and through simulations the proposed scheduling scheme. The obtained results demonstrate that it reduces packet collisions and significantly increases the C-V2X performance compared to when using the sensing-based SPS scheme.
versions: [{"created": "Tue, 25 Jun 2019 08:12:28 GMT", "version": "v1"}]
update_date: 2019-09-20
authors_parsed: [["Molina-Masegosa", "Rafael", ""], ["Sepulcre", "Miguel", ""], ["Gozalvez", "Javier", ""]]
abstract: Cellular Vehicle-to-Everything (C-V2X) networks can operate without cellular infrastructure support. Vehicles can autonomously select their radio resources using the sensing-based Semi-Persistent Scheduling (SPS) algorithm specified by the Third Generation Partnership Project (3GPP). The sensing nature of the SPS scheme makes C-V2X communications prone to the well-known hidden-terminal problem. To address this problem, this paper proposes a novel geo-based scheduling scheme that allows vehicles to autonomously select their radio resources based on the location and ordering of neighboring vehicles on the road. The proposed scheme results in an implicit resource selection coordination between vehicles (even with those outside the sensing range) that reduces packet collisions. This paper evaluates analytically and through simulations the proposed scheduling scheme. The obtained results demonstrate that it reduces packet collisions and significantly increases the C-V2X performance compared to when using the sensing-based SPS scheme.
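The `authors_parsed` column above is a nested list with one `[last, first, suffix]` entry per author (some rows append an affiliation as a fourth element; see a later record). A minimal sketch of turning it into display names; `format_authors` is a hypothetical helper, not part of the dataset.

```python
# Sketch: flatten the nested `authors_parsed` field into display names.
# Each entry is [last, first, suffix]; empty parts are skipped.
def format_authors(authors_parsed):
    names = []
    for entry in authors_parsed:
        last, first, suffix = entry[0], entry[1], entry[2]
        # Assemble "First Last Suffix", dropping empty strings.
        names.append(" ".join(p for p in (first, last, suffix) if p))
    return ", ".join(names)

authors_parsed = [
    ["Molina-Masegosa", "Rafael", ""],
    ["Sepulcre", "Miguel", ""],
    ["Gozalvez", "Javier", ""],
]
print(format_authors(authors_parsed))
# Rafael Molina-Masegosa, Miguel Sepulcre, Javier Gozalvez
```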
id: 2312.07340
submitter: Yusen Feng
authors: Yusen Feng, Xiyan Xu, Libin Liu
title: MuscleVAE: Model-Based Controllers of Muscle-Actuated Characters
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.GR
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: In this paper, we present a simulation and control framework for generating biomechanically plausible motion for muscle-actuated characters. We incorporate a fatigue dynamics model, the 3CC-r model, into the widely-adopted Hill-type muscle model to simulate the development and recovery of fatigue in muscles, which creates a natural evolution of motion style caused by the accumulation of fatigue from prolonged activities. To address the challenging problem of controlling a musculoskeletal system with high degrees of freedom, we propose a novel muscle-space control strategy based on PD control. Our simulation and control framework facilitates the training of a generative model for muscle-based motion control, which we refer to as MuscleVAE. By leveraging the variational autoencoders (VAEs), MuscleVAE is capable of learning a rich and flexible latent representation of skills from a large unstructured motion dataset, encoding not only motion features but also muscle control and fatigue properties. We demonstrate that the MuscleVAE model can be efficiently trained using a model-based approach, resulting in the production of high-fidelity motions and enabling a variety of downstream tasks.
versions: [{"created": "Tue, 12 Dec 2023 15:01:17 GMT", "version": "v1"}]
update_date: 2023-12-13
authors_parsed: [["Feng", "Yusen", ""], ["Xu", "Xiyan", ""], ["Liu", "Libin", ""]]
abstract: In this paper, we present a simulation and control framework for generating biomechanically plausible motion for muscle-actuated characters. We incorporate a fatigue dynamics model, the 3CC-r model, into the widely-adopted Hill-type muscle model to simulate the development and recovery of fatigue in muscles, which creates a natural evolution of motion style caused by the accumulation of fatigue from prolonged activities. To address the challenging problem of controlling a musculoskeletal system with high degrees of freedom, we propose a novel muscle-space control strategy based on PD control. Our simulation and control framework facilitates the training of a generative model for muscle-based motion control, which we refer to as MuscleVAE. By leveraging the variational autoencoders (VAEs), MuscleVAE is capable of learning a rich and flexible latent representation of skills from a large unstructured motion dataset, encoding not only motion features but also muscle control and fatigue properties. We demonstrate that the MuscleVAE model can be efficiently trained using a model-based approach, resulting in the production of high-fidelity motions and enabling a variety of downstream tasks.
id: 2211.08292
submitter: Jennifer Andreoli-Fang
authors: Jennifer Andreoli-Fang, John T Chapman
title: Mobile-Aware Scheduling for Low Latency Backhaul over DOCSIS
comments: IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2017
journal-ref: null
doi: 10.1109/PIMRC.2017.8292173
report-no: null
categories: cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In this paper, we discuss latency reduction techniques for mobile backhaul over Data Over Cable Service Interface Specifications (DOCSIS) networks. When the latencies from both the wireless and the DOCSIS networks are added together, it can result in noticeable end-to-end system latency, particularly under network congestion. Previously, we proposed a method to improve upstream user-to-mobile core latency by coordinating the LTE and DOCSIS scheduling. The method reduces the impact on system latency from the DOCSIS network's request-grant-data loop, which is the main contributor of backhaul upstream latency. Since the method reduces latency on the DOCSIS data path, it will therefore improve performance of latency sensitive applications, particularly if TCP is used as the transport protocol, especially when the link is congested. In this paper, we investigate the effect of HARQ failure on system performance. Through simulation, we show that despite the uncertainty introduced by the LTE protocol, coordinated scheduling improves overall system latency.
versions: [{"created": "Tue, 15 Nov 2022 16:45:26 GMT", "version": "v1"}, {"created": "Wed, 16 Nov 2022 16:38:44 GMT", "version": "v2"}]
update_date: 2022-11-17
authors_parsed: [["Andreoli-Fang", "Jennifer", ""], ["Chapman", "John T", ""]]
abstract: In this paper, we discuss latency reduction techniques for mobile backhaul over Data Over Cable Service Interface Specifications (DOCSIS) networks. When the latencies from both the wireless and the DOCSIS networks are added together, it can result in noticeable end-to-end system latency, particularly under network congestion. Previously, we proposed a method to improve upstream user-to-mobile core latency by coordinating the LTE and DOCSIS scheduling. The method reduces the impact on system latency from the DOCSIS network's request-grant-data loop, which is the main contributor of backhaul upstream latency. Since the method reduces latency on the DOCSIS data path, it will therefore improve performance of latency sensitive applications, particularly if TCP is used as the transport protocol, especially when the link is congested. In this paper, we investigate the effect of HARQ failure on system performance. Through simulation, we show that despite the uncertainty introduced by the LTE protocol, coordinated scheduling improves overall system latency.
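The `versions` list above holds one dict per revision with an RFC 2822 style `created` timestamp. A standard-library sketch of computing the revision span; it assumes the timestamps always parse cleanly, which these samples suggest but the full dataset may not guarantee.

```python
# Sketch: measure how long a paper was revised, from its `versions` field.
from email.utils import parsedate_to_datetime

versions = [
    {"created": "Tue, 15 Nov 2022 16:45:26 GMT", "version": "v1"},
    {"created": "Wed, 16 Nov 2022 16:38:44 GMT", "version": "v2"},
]
created = sorted(parsedate_to_datetime(v["created"]) for v in versions)
print(f"{len(versions)} versions, revised over {created[-1] - created[0]}")
# 2 versions, revised over 23:53:18
```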
id: 2103.14856
submitter: Ciriaco Andrea D'Angelo
authors: Giovanni Abramo, Ciriaco Andrea D'Angelo, Lin Zhang
title: A comparison of two approaches for measuring interdisciplinary research output: the disciplinary diversity of authors vs the disciplinary diversity of the reference list
comments: null
journal-ref: Journal of Informetrics, 12(4), 2018, 1182-1193
doi: 10.1016/j.joi.2018.09.001
report-no: null
categories: cs.DL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This study investigates the convergence of two bibliometric approaches to the measurement of interdisciplinary research: one based on analyzing disciplinary diversity in the reference list of publications, the other based on the disciplinary diversity of authors of publications. In particular we measure the variety, balance, disparity and integrated diversity index of, respectively, single-author, multi-author single-field, and multi-author multi-field publications. We find that, in general, the diversity of the reference list grows with the number of fields reflected in a paper's authors' list and, to a lesser extent, with the number of authors being equal the number of fields. Further, we find that when fields belonging to different disciplines are reflected in the authors' list, the disparity in the reference list is higher than in the case of fields belonging to the same discipline. However, this general tendency varies across disciplines, and noticeable exceptions are found at individual paper level.
versions: [{"created": "Sat, 27 Mar 2021 09:34:53 GMT", "version": "v1"}]
update_date: 2021-03-30
authors_parsed: [["Abramo", "Giovanni", ""], ["D'Angelo", "Ciriaco Andrea", ""], ["Zhang", "Lin", ""]]
abstract: This study investigates the convergence of two bibliometric approaches to the measurement of interdisciplinary research: one based on analyzing disciplinary diversity in the reference list of publications, the other based on the disciplinary diversity of authors of publications. In particular we measure the variety, balance, disparity and integrated diversity index of, respectively, single-author, multi-author single-field, and multi-author multi-field publications. We find that, in general, the diversity of the reference list grows with the number of fields reflected in a paper's authors' list and, to a lesser extent, with the number of authors being equal the number of fields. Further, we find that when fields belonging to different disciplines are reflected in the authors' list, the disparity in the reference list is higher than in the case of fields belonging to the same discipline. However, this general tendency varies across disciplines, and noticeable exceptions are found at individual paper level.
id: 1910.12435
submitter: Marcel Keller
authors: Anders Dalskov and Daniel Escudero and Marcel Keller
title: Secure Evaluation of Quantized Neural Networks
comments: 22 pages
journal-ref: Proceedings on Privacy Enhancing Technologies 4 (2020): 355-375
doi: 10.2478/popets-2020-0077
report-no: null
categories: cs.CR cs.LG
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: We investigate two questions in this paper: First, we ask to what extent "MPC friendly" models are already supported by major Machine Learning frameworks such as TensorFlow or PyTorch. Prior works provide protocols that only work on fixed-point integers and specialized activation functions, two aspects that are not supported by popular Machine Learning frameworks, and the need for these specialized model representations means that it is hard, and often impossible, to use e.g., TensorFlow to design, train and test models that later have to be evaluated securely. Second, we ask to what extent the functionality for evaluating Neural Networks already exists in general-purpose MPC frameworks. These frameworks have received more scrutiny, are better documented and supported on more platforms. Furthermore, they are typically flexible in terms of the threat model they support. In contrast, most secure evaluation protocols in the literature are targeted to a specific threat model and their implementations are only a "proof-of-concept", making it very hard for their adoption in practice. We answer both of the above questions in a positive way: We observe that the quantization techniques supported by both TensorFlow, PyTorch and MXNet can provide models in a representation that can be evaluated securely; and moreover, that this evaluation can be performed by a general purpose MPC framework. We perform extensive benchmarks to understand the exact trade-offs between different corruption models, network sizes and efficiency. These experiments provide an interesting insight into cost between active and passive security, as well as honest and dishonest majority. Our work shows then that the separating line between existing ML frameworks and existing MPC protocols may be narrower than implicitly suggested by previous works.
versions: [{"created": "Mon, 28 Oct 2019 04:17:33 GMT", "version": "v1"}, {"created": "Mon, 1 Mar 2021 04:22:07 GMT", "version": "v2"}]
update_date: 2021-03-02
authors_parsed: [["Dalskov", "Anders", ""], ["Escudero", "Daniel", ""], ["Keller", "Marcel", ""]]
abstract: We investigate two questions in this paper: First, we ask to what extent "MPC friendly" models are already supported by major Machine Learning frameworks such as TensorFlow or PyTorch. Prior works provide protocols that only work on fixed-point integers and specialized activation functions, two aspects that are not supported by popular Machine Learning frameworks, and the need for these specialized model representations means that it is hard, and often impossible, to use e.g., TensorFlow to design, train and test models that later have to be evaluated securely. Second, we ask to what extent the functionality for evaluating Neural Networks already exists in general-purpose MPC frameworks. These frameworks have received more scrutiny, are better documented and supported on more platforms. Furthermore, they are typically flexible in terms of the threat model they support. In contrast, most secure evaluation protocols in the literature are targeted to a specific threat model and their implementations are only a "proof-of-concept", making it very hard for their adoption in practice. We answer both of the above questions in a positive way: We observe that the quantization techniques supported by both TensorFlow, PyTorch and MXNet can provide models in a representation that can be evaluated securely; and moreover, that this evaluation can be performed by a general purpose MPC framework. We perform extensive benchmarks to understand the exact trade-offs between different corruption models, network sizes and efficiency. These experiments provide an interesting insight into cost between active and passive security, as well as honest and dishonest majority. Our work shows then that the separating line between existing ML frameworks and existing MPC protocols may be narrower than implicitly suggested by previous works.
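The record above carries two tags in its `categories` string ("cs.CR cs.LG"), which is a single space-separated field. A sketch of exact-tag filtering; `ds` is the hypothetical loaded dataset from the first snippet.

```python
# Sketch: filter rows by arXiv category. Split the string rather than using
# substring matching, so a short tag never matches inside a longer one.
def has_category(row, cat):
    return cat in row["categories"].split()

row = {"categories": "cs.CR cs.LG"}
print(has_category(row, "cs.LG"))  # True
print(has_category(row, "cs.NI"))  # False

# With the loaded dataset this plugs into .filter(), e.g.:
# ml_rows = ds.filter(lambda r: has_category(r, "cs.LG"))
```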
id: 2407.06855
submitter: Arnab Sharma
authors: Sourabh Kapoor, Arnab Sharma, Michael Röder, Caglar Demir, Axel-Cyrille Ngonga Ngomo
title: Performance Evaluation of Knowledge Graph Embedding Approaches under Non-adversarial Attacks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.CR
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Knowledge Graph Embedding (KGE) transforms a discrete Knowledge Graph (KG) into a continuous vector space facilitating its use in various AI-driven applications like Semantic Search, Question Answering, or Recommenders. While KGE approaches are effective in these applications, most existing approaches assume that all information in the given KG is correct. This enables attackers to influence the output of these approaches, e.g., by perturbing the input. Consequently, the robustness of such KGE approaches has to be addressed. Recent work focused on adversarial attacks. However, non-adversarial attacks on all attack surfaces of these approaches have not been thoroughly examined. We close this gap by evaluating the impact of non-adversarial attacks on the performance of 5 state-of-the-art KGE algorithms on 5 datasets with respect to attacks on 3 attack surfaces-graph, parameter, and label perturbation. Our evaluation results suggest that label perturbation has a strong effect on the KGE performance, followed by parameter perturbation with a moderate and graph with a low effect.
versions: [{"created": "Tue, 9 Jul 2024 13:42:14 GMT", "version": "v1"}]
update_date: 2024-07-10
authors_parsed: [["Kapoor", "Sourabh", ""], ["Sharma", "Arnab", ""], ["Röder", "Michael", ""], ["Demir", "Caglar", ""], ["Ngomo", "Axel-Cyrille Ngonga", ""]]
abstract: Knowledge Graph Embedding (KGE) transforms a discrete Knowledge Graph (KG) into a continuous vector space facilitating its use in various AI-driven applications like Semantic Search, Question Answering, or Recommenders. While KGE approaches are effective in these applications, most existing approaches assume that all information in the given KG is correct. This enables attackers to influence the output of these approaches, e.g., by perturbing the input. Consequently, the robustness of such KGE approaches has to be addressed. Recent work focused on adversarial attacks. However, non-adversarial attacks on all attack surfaces of these approaches have not been thoroughly examined. We close this gap by evaluating the impact of non-adversarial attacks on the performance of 5 state-of-the-art KGE algorithms on 5 datasets with respect to attacks on 3 attack surfaces-graph, parameter, and label perturbation. Our evaluation results suggest that label perturbation has a strong effect on the KGE performance, followed by parameter perturbation with a moderate and graph with a low effect.
id: 1211.1861
submitter: Mohamed Firdhous
authors: Mohamed Firdhous
title: Automating Legal Research through Data Mining
comments: 8 pages, 11 figures, published in (IJACSA) International Journal of Advanced Computer Science and Applications. arXiv admin note: text overlap with wikipedia entry on text mining
journal-ref: International Journal of Advanced Computer Science and Applications (IJACSA) Vol. 1, No 6, December 2010, pp. 9-16
doi: null
report-no: null
categories: cs.IR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The term legal research generally refers to the process of identifying and retrieving appropriate information necessary to support legal decision making from past case records. At present, the process is mostly manual, but some traditional technologies such as keyword searching are commonly used to speed the process up. But a keyword search is not a comprehensive search to cater to the requirements of legal research as the search result includes too many false hits in terms of irrelevant case records. Hence the present generic tools cannot be used to automate legal research. This paper presents a framework which was developed by combining several Text Mining techniques to automate the process overcoming the difficulties in the existing methods. Further, the research also identifies the possible enhancements that could be done to enhance the effectiveness of the framework.
versions: [{"created": "Thu, 8 Nov 2012 14:19:49 GMT", "version": "v1"}]
update_date: 2012-11-09
authors_parsed: [["Firdhous", "Mohamed", ""]]
abstract: The term legal research generally refers to the process of identifying and retrieving appropriate information necessary to support legal decision making from past case records. At present, the process is mostly manual, but some traditional technologies such as keyword searching are commonly used to speed the process up. But a keyword search is not a comprehensive search to cater to the requirements of legal research as the search result includes too many false hits in terms of irrelevant case records. Hence the present generic tools cannot be used to automate legal research. This paper presents a framework which was developed by combining several Text Mining techniques to automate the process overcoming the difficulties in the existing methods. Further, the research also identifies the possible enhancements that could be done to enhance the effectiveness of the framework.
id: 1602.00810
submitter: Jean-Guillaume Dumas
authors: Jean-Guillaume Dumas (LJK), Erich Kaltofen (NCSU), Emmanuel Thomé (CARAMBA), Gilles Villard (ARIC, LIP)
title: Linear Time Interactive Certificates for the Minimal Polynomial and the Determinant of a Sparse Matrix
comments: null
journal-ref: International Symposium on Symbolic and Algebraic Computation, Jul 2016, Waterloo, Canada. pp.199-206, ⟨10.1145/2930889.2930908⟩
doi: null
report-no: null
categories: cs.SC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Computational problem certificates are additional data structures for each output, which can be used by a-possibly randomized-verification algorithm that proves the correctness of each output. In this paper, we give an algorithm that computes a certificate for the minimal polynomial of sparse or structured nxn matrices over an abstract field, of sufficiently large cardinality, whose Monte Carlo verification complexity requires a single matrix-vector multiplication and a linear number of extra field operations. We also propose a novel preconditioner that ensures irreducibility of the characteristic polynomial of the generically preconditioned matrix. This preconditioner takes linear time to be applied and uses only two random entries. We then combine these two techniques to give algorithms that compute certificates for the determinant, and thus for the characteristic polynomial, whose Monte Carlo verification complexity is therefore also linear.
versions: [{"created": "Tue, 2 Feb 2016 07:29:28 GMT", "version": "v1"}, {"created": "Mon, 2 Dec 2019 13:02:25 GMT", "version": "v2"}]
update_date: 2019-12-03
authors_parsed: [["Dumas", "Jean-Guillaume", "", "LJK"], ["Kaltofen", "Erich", "", "NCSU"], ["Thomé", "Emmanuel", "", "CARAMBA"], ["Villard", "Gilles", "", "ARIC, LIP"]]
abstract: Computational problem certificates are additional data structures for each output, which can be used by a-possibly randomized-verification algorithm that proves the correctness of each output. In this paper, we give an algorithm that computes a certificate for the minimal polynomial of sparse or structured nxn matrices over an abstract field, of sufficiently large cardinality, whose Monte Carlo verification complexity requires a single matrix-vector multiplication and a linear number of extra field operations. We also propose a novel preconditioner that ensures irreducibility of the characteristic polynomial of the generically preconditioned matrix. This preconditioner takes linear time to be applied and uses only two random entries. We then combine these two techniques to give algorithms that compute certificates for the determinant, and thus for the characteristic polynomial, whose Monte Carlo verification complexity is therefore also linear.
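The record above shows `authors_parsed` entries with a fourth element carrying the affiliation (e.g. "LJK"). A tolerant sketch that handles both 3- and 4-element entries; `author_affiliations` is a hypothetical helper.

```python
# Sketch: map author names to affiliations, which appear in some rows
# as an optional fourth list element of `authors_parsed`.
def author_affiliations(authors_parsed):
    out = {}
    for entry in authors_parsed:
        last, first = entry[0], entry[1]
        out[f"{first} {last}"] = entry[3] if len(entry) > 3 else None
    return out

parsed = [
    ["Dumas", "Jean-Guillaume", "", "LJK"],
    ["Kaltofen", "Erich", "", "NCSU"],
]
print(author_affiliations(parsed))
# {'Jean-Guillaume Dumas': 'LJK', 'Erich Kaltofen': 'NCSU'}
```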
id: 1912.07976
submitter: Heng Yang
authors: Heng Yang, Biqing Zeng, JianHao Yang, Youwei Song and Ruyang Xu
title: A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction
comments: Submitted to Elsevier
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Aspect-based sentiment analysis (ABSA) task is a multi-grained task of natural language processing and consists of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC). Most of the existing work focuses on the subtask of aspect term polarity inferring and ignores the significance of aspect term extraction. Besides, the existing researches do not pay attention to the research of the Chinese-oriented ABSA task. Based on the local context focus (LCF) mechanism, this paper firstly proposes a multi-task learning model for Chinese-oriented aspect-based sentiment analysis, namely LCF-ATEPC. Compared with existing models, this model equips the capability of extracting aspect term and inferring aspect term polarity synchronously, moreover, this model is effective to analyze both Chinese and English comments simultaneously and the experiment on a multilingual mixed dataset proved its availability. By integrating the domain-adapted BERT model, the LCF-ATEPC model achieved the state-of-the-art performance of aspect term extraction and aspect polarity classification in four Chinese review datasets. Besides, the experimental results on the most commonly used SemEval-2014 task4 Restaurant and Laptop datasets outperform the state-of-the-art performance on the ATE and APC subtask.
versions: [{"created": "Tue, 17 Dec 2019 12:47:33 GMT", "version": "v1"}, {"created": "Thu, 19 Dec 2019 01:38:38 GMT", "version": "v2"}, {"created": "Wed, 12 Feb 2020 09:20:28 GMT", "version": "v3"}]
update_date: 2020-02-13
authors_parsed: [["Yang", "Heng", ""], ["Zeng", "Biqing", ""], ["Yang", "JianHao", ""], ["Song", "Youwei", ""], ["Xu", "Ruyang", ""]]
abstract: Aspect-based sentiment analysis (ABSA) task is a multi-grained task of natural language processing and consists of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC). Most of the existing work focuses on the subtask of aspect term polarity inferring and ignores the significance of aspect term extraction. Besides, the existing researches do not pay attention to the research of the Chinese-oriented ABSA task. Based on the local context focus (LCF) mechanism, this paper firstly proposes a multi-task learning model for Chinese-oriented aspect-based sentiment analysis, namely LCF-ATEPC. Compared with existing models, this model equips the capability of extracting aspect term and inferring aspect term polarity synchronously, moreover, this model is effective to analyze both Chinese and English comments simultaneously and the experiment on a multilingual mixed dataset proved its availability. By integrating the domain-adapted BERT model, the LCF-ATEPC model achieved the state-of-the-art performance of aspect term extraction and aspect polarity classification in four Chinese review datasets. Besides, the experimental results on the most commonly used SemEval-2014 task4 Restaurant and Laptop datasets outperform the state-of-the-art performance on the ATE and APC subtask.
id: 2007.00584
submitter: Guillaume Jaume
authors: Pushpak Pati, Guillaume Jaume, Lauren Alisha Fernandes, Antonio Foncubierta, Florinda Feroce, Anna Maria Anniciello, Giosue Scognamiglio, Nadia Brancati, Daniel Riccio, Maurizio Do Bonito, Giuseppe De Pietro, Gerardo Botti, Orcun Goksel, Jean-Philippe Thiran, Maria Frucci, Maria Gabrani
title: HACT-Net: A Hierarchical Cell-to-Tissue Graph Neural Network for Histopathological Image Classification
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Cancer diagnosis, prognosis, and therapeutic response prediction are heavily influenced by the relationship between the histopathological structures and the function of the tissue. Recent approaches acknowledging the structure-function relationship, have linked the structural and spatial patterns of cell organization in tissue via cell-graphs to tumor grades. Though cell organization is imperative, it is insufficient to entirely represent the histopathological structure. We propose a novel hierarchical cell-to-tissue-graph (HACT) representation to improve the structural depiction of the tissue. It consists of a low-level cell-graph, capturing cell morphology and interactions, a high-level tissue-graph, capturing morphology and spatial distribution of tissue parts, and cells-to-tissue hierarchies, encoding the relative spatial distribution of the cells with respect to the tissue distribution. Further, a hierarchical graph neural network (HACT-Net) is proposed to efficiently map the HACT representations to histopathological breast cancer subtypes. We assess the methodology on a large set of annotated tissue regions of interest from H&E stained breast carcinoma whole-slides. Upon evaluation, the proposed method outperformed recent convolutional neural network and graph neural network approaches for breast cancer multi-class subtyping. The proposed entity-based topological analysis is more inline with the pathological diagnostic procedure of the tissue. It provides more command over the tissue modelling, therefore encourages the further inclusion of pathological priors into task-specific tissue representation.
versions: [{"created": "Wed, 1 Jul 2020 16:22:48 GMT", "version": "v1"}]
update_date: 2020-07-02
authors_parsed: [["Pati", "Pushpak", ""], ["Jaume", "Guillaume", ""], ["Fernandes", "Lauren Alisha", ""], ["Foncubierta", "Antonio", ""], ["Feroce", "Florinda", ""], ["Anniciello", "Anna Maria", ""], ["Scognamiglio", "Giosue", ""], ["Brancati", "Nadia", ""], ["Riccio", "Daniel", ""], ["Bonito", "Maurizio Do", ""], ["De Pietro", "Giuseppe", ""], ["Botti", "Gerardo", ""], ["Goksel", "Orcun", ""], ["Thiran", "Jean-Philippe", ""], ["Frucci", "Maria", ""], ["Gabrani", "Maria", ""]]
abstract: Cancer diagnosis, prognosis, and therapeutic response prediction are heavily influenced by the relationship between the histopathological structures and the function of the tissue. Recent approaches acknowledging the structure-function relationship, have linked the structural and spatial patterns of cell organization in tissue via cell-graphs to tumor grades. Though cell organization is imperative, it is insufficient to entirely represent the histopathological structure. We propose a novel hierarchical cell-to-tissue-graph (HACT) representation to improve the structural depiction of the tissue. It consists of a low-level cell-graph, capturing cell morphology and interactions, a high-level tissue-graph, capturing morphology and spatial distribution of tissue parts, and cells-to-tissue hierarchies, encoding the relative spatial distribution of the cells with respect to the tissue distribution. Further, a hierarchical graph neural network (HACT-Net) is proposed to efficiently map the HACT representations to histopathological breast cancer subtypes. We assess the methodology on a large set of annotated tissue regions of interest from H&E stained breast carcinoma whole-slides. Upon evaluation, the proposed method outperformed recent convolutional neural network and graph neural network approaches for breast cancer multi-class subtyping. The proposed entity-based topological analysis is more inline with the pathological diagnostic procedure of the tissue. It provides more command over the tissue modelling, therefore encourages the further inclusion of pathological priors into task-specific tissue representation.
id: 0810.5057
submitter: Patricia Gautier
authors: Claire François (INIST), Jean-Charles Lamirel (INRIA Lorraine - LORIA), Shadi Al Shehabi (INRIA Lorraine - LORIA)
title: Combining Advanced Visualization and Automatized Reasoning for Webometrics: A Test Study
comments: null
journal-ref: COLLNET 2006, France (2006)
doi: null
report-no: null
categories: cs.IR cs.DL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This paper presents a first attempt at performing a precise and automatic identification of the linking behaviour in a scientific domain through the analysis of the communication of the related academic institutions on the web. The proposed approach is based on the paradigm of multiple viewpoint data analysis (MVDA) than can be fruitfully exploited to highlight relationships between data, like websites, carrying several kinds of description. It uses the MultiSOM clustering and mapping method. The domain that has been chosen for this study is the domain of Computer Science in Germany. The analysis is conduced on a set of 438 websites of this domain using all together, thematic, geographic and linking information. It highlights interesting results concerning both global and local linking behaviour.
versions: [{"created": "Tue, 28 Oct 2008 15:43:45 GMT", "version": "v1"}, {"created": "Tue, 20 Oct 2009 15:27:41 GMT", "version": "v2"}]
update_date: 2009-10-20
authors_parsed: [["François", "Claire", "", "INIST"], ["Lamirel", "Jean-Charles", "", "INRIA Lorraine -\n LORIA"], ["Shehabi", "Shadi Al", "", "INRIA Lorraine - LORIA"]]
abstract: This paper presents a first attempt at performing a precise and automatic identification of the linking behaviour in a scientific domain through the analysis of the communication of the related academic institutions on the web. The proposed approach is based on the paradigm of multiple viewpoint data analysis (MVDA) than can be fruitfully exploited to highlight relationships between data, like websites, carrying several kinds of description. It uses the MultiSOM clustering and mapping method. The domain that has been chosen for this study is the domain of Computer Science in Germany. The analysis is conduced on a set of 438 websites of this domain using all together, thematic, geographic and linking information. It highlights interesting results concerning both global and local linking behaviour.
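Note the embedded newline in one affiliation above ("INRIA Lorraine -\n LORIA"), a hard-wrap artifact preserved in the raw metadata. A one-liner sketch for normalizing such strings before use:

```python
# Sketch: collapse internal whitespace (including stray newlines) in
# affiliation strings taken from `authors_parsed`.
def clean_affiliation(aff):
    return " ".join(aff.split()) if aff else aff

print(clean_affiliation("INRIA Lorraine -\n LORIA"))
# INRIA Lorraine - LORIA
```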
id: 0808.3651
submitter: Lijun Zhang
authors: Lijun Zhang, Holger Hermanns, Friedrich Eisenbrand and David N. Jansen
title: Flow Faster: Efficient Decision Algorithms for Probabilistic Simulations
comments: LMCS
journal-ref: Logical Methods in Computer Science, Volume 4, Issue 4 (November 11, 2008) lmcs:989
doi: 10.2168/LMCS-4(4:6)2008
report-no: null
categories: cs.LO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Strong and weak simulation relations have been proposed for Markov chains, while strong simulation and strong probabilistic simulation relations have been proposed for probabilistic automata. However, decision algorithms for strong and weak simulation over Markov chains, and for strong simulation over probabilistic automata are not efficient, which makes it as yet unclear whether they can be used as effectively as their non-probabilistic counterparts. This paper presents drastically improved algorithms to decide whether some (discrete- or continuous-time) Markov chain strongly or weakly simulates another, or whether a probabilistic automaton strongly simulates another. The key innovation is the use of parametric maximum flow techniques to amortize computations. We also present a novel algorithm for deciding strong probabilistic simulation preorders on probabilistic automata, which has polynomial complexity via a reduction to an LP problem. When extending the algorithms for probabilistic automata to their continuous-time counterpart, we retain the same complexity for both strong and strong probabilistic simulations.
versions: [{"created": "Wed, 27 Aug 2008 08:35:44 GMT", "version": "v1"}, {"created": "Mon, 10 Nov 2008 23:56:01 GMT", "version": "v2"}, {"created": "Tue, 18 Nov 2008 17:00:30 GMT", "version": "v3"}]
update_date: 2015-07-01
authors_parsed: [["Zhang", "Lijun", ""], ["Hermanns", "Holger", ""], ["Eisenbrand", "Friedrich", ""], ["Jansen", "David N.", ""]]
abstract: Strong and weak simulation relations have been proposed for Markov chains, while strong simulation and strong probabilistic simulation relations have been proposed for probabilistic automata. However, decision algorithms for strong and weak simulation over Markov chains, and for strong simulation over probabilistic automata are not efficient, which makes it as yet unclear whether they can be used as effectively as their non-probabilistic counterparts. This paper presents drastically improved algorithms to decide whether some (discrete- or continuous-time) Markov chain strongly or weakly simulates another, or whether a probabilistic automaton strongly simulates another. The key innovation is the use of parametric maximum flow techniques to amortize computations. We also present a novel algorithm for deciding strong probabilistic simulation preorders on probabilistic automata, which has polynomial complexity via a reduction to an LP problem. When extending the algorithms for probabilistic automata to their continuous-time counterpart, we retain the same complexity for both strong and strong probabilistic simulations.
id: 2403.06757
submitter: Anthony Frion
authors: Anthony Frion, Lucas Drumetz, Guillaume Tochon, Mauro Dalla Mura, Albdeldjalil Aïssa El Bey
title: Koopman Ensembles for Probabilistic Time Series Forecasting
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: In the context of an increasing popularity of data-driven models to represent dynamical systems, many machine learning-based implementations of the Koopman operator have recently been proposed. However, the vast majority of those works are limited to deterministic predictions, while the knowledge of uncertainty is critical in fields like meteorology and climatology. In this work, we investigate the training of ensembles of models to produce stochastic outputs. We show through experiments on real remote sensing image time series that ensembles of independently trained models are highly overconfident and that using a training criterion that explicitly encourages the members to produce predictions with high inter-model variances greatly improves the uncertainty quantification of the ensembles.
versions: [{"created": "Mon, 11 Mar 2024 14:29:56 GMT", "version": "v1"}, {"created": "Wed, 13 Mar 2024 13:57:42 GMT", "version": "v2"}]
update_date: 2024-03-14
authors_parsed: [["Frion", "Anthony", ""], ["Drumetz", "Lucas", ""], ["Tochon", "Guillaume", ""], ["Mura", "Mauro Dalla", ""], ["Bey", "Albdeldjalil Aïssa El", ""]]
abstract: In the context of an increasing popularity of data-driven models to represent dynamical systems, many machine learning-based implementations of the Koopman operator have recently been proposed. However, the vast majority of those works are limited to deterministic predictions, while the knowledge of uncertainty is critical in fields like meteorology and climatology. In this work, we investigate the training of ensembles of models to produce stochastic outputs. We show through experiments on real remote sensing image time series that ensembles of independently trained models are highly overconfident and that using a training criterion that explicitly encourages the members to produce predictions with high inter-model variances greatly improves the uncertainty quantification of the ensembles.
id: 1702.06968
submitter: Arthur-Jozsef Molnar
authors: Arthur-Jozsef Molnar
title: A Heuristic Process for GUI Widget Matching Across Application Versions
comments: null
journal-ref: Annales Universitatis Scientiarum Budapest, Sectio Computatorica Vol 36, pages 255 - 275 (2012)
doi: null
report-no: null
categories: cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This paper introduces an automated heuristic process able to achieve high accuracy when matching graphical user interface widgets across multiple versions of a target application. The proposed implementation is flexible as it allows full customization of the process and easy integration with existing tools for long term graphical user interface test case maintenance, software visualization and analysis.
versions: [{"created": "Wed, 22 Feb 2017 19:08:11 GMT", "version": "v1"}]
update_date: 2017-02-24
authors_parsed: [["Molnar", "Arthur-Jozsef", ""]]
abstract: This paper introduces an automated heuristic process able to achieve high accuracy when matching graphical user interface widgets across multiple versions of a target application. The proposed implementation is flexible as it allows full customization of the process and easy integration with existing tools for long term graphical user interface test case maintenance, software visualization and analysis.
id: 2307.10936
submitter: Raphael Boige
authors: Raphael Boige and Yannis Flet-Berliac and Arthur Flajolet and Guillaume Richard and Thomas Pierrot
title: PASTA: Pretrained Action-State Transformer Agents
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Self-supervised learning has brought about a revolutionary paradigm shift in various computing domains, including NLP, vision, and biology. Recent approaches involve pre-training transformer models on vast amounts of unlabeled data, serving as a starting point for efficiently solving downstream tasks. In reinforcement learning, researchers have recently adapted these approaches, developing models pre-trained on expert trajectories. This advancement enables the models to tackle a broad spectrum of tasks, ranging from robotics to recommendation systems. However, existing methods mostly rely on intricate pre-training objectives tailored to specific downstream applications. This paper conducts a comprehensive investigation of models, referred to as pre-trained action-state transformer agents (PASTA). Our study covers a unified methodology and covers an extensive set of general downstream tasks including behavioral cloning, offline RL, sensor failure robustness, and dynamics change adaptation. Our objective is to systematically compare various design choices and offer valuable insights that will aid practitioners in developing robust models. Key highlights of our study include tokenization at the component level for actions and states, the use of fundamental pre-training objectives such as next token prediction or masked language modeling, simultaneous training of models across multiple domains, and the application of various fine-tuning strategies. In this study, the developed models contain fewer than 7 million parameters allowing a broad community to use these models and reproduce our experiments. We hope that this study will encourage further research into the use of transformers with first principle design choices to represent RL trajectories and contribute to robust policy learning.
versions: [{"created": "Thu, 20 Jul 2023 15:09:06 GMT", "version": "v1"}, {"created": "Mon, 4 Dec 2023 10:15:26 GMT", "version": "v2"}]
update_date: 2023-12-05
authors_parsed: [["Boige", "Raphael", ""], ["Flet-Berliac", "Yannis", ""], ["Flajolet", "Arthur", ""], ["Richard", "Guillaume", ""], ["Pierrot", "Thomas", ""]]
abstract: Self-supervised learning has brought about a revolutionary paradigm shift in various computing domains, including NLP, vision, and biology. Recent approaches involve pre-training transformer models on vast amounts of unlabeled data, serving as a starting point for efficiently solving downstream tasks. In reinforcement learning, researchers have recently adapted these approaches, developing models pre-trained on expert trajectories. This advancement enables the models to tackle a broad spectrum of tasks, ranging from robotics to recommendation systems. However, existing methods mostly rely on intricate pre-training objectives tailored to specific downstream applications. This paper conducts a comprehensive investigation of models, referred to as pre-trained action-state transformer agents (PASTA). Our study covers a unified methodology and covers an extensive set of general downstream tasks including behavioral cloning, offline RL, sensor failure robustness, and dynamics change adaptation. Our objective is to systematically compare various design choices and offer valuable insights that will aid practitioners in developing robust models. Key highlights of our study include tokenization at the component level for actions and states, the use of fundamental pre-training objectives such as next token prediction or masked language modeling, simultaneous training of models across multiple domains, and the application of various fine-tuning strategies. In this study, the developed models contain fewer than 7 million parameters allowing a broad community to use these models and reproduce our experiments. We hope that this study will encourage further research into the use of transformers with first principle design choices to represent RL trajectories and contribute to robust policy learning.
id: 1612.00604
submitter: Andrii Maksai
authors: Andrii Maksai, Xinchao Wang, Francois Fleuret, and Pascal Fua
title: Globally Consistent Multi-People Tracking using Motion Patterns
comments: 8 pages, 7 figures. 11 pages supplementary
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Many state-of-the-art approaches to people tracking rely on detecting them in each frame independently, grouping detections into short but reliable trajectory segments, and then further grouping them into full trajectories. This grouping typically relies on imposing local smoothness constraints but almost never on enforcing more global constraints on the trajectories. In this paper, we propose an approach to imposing global consistency by first inferring behavioral patterns from the ground truth and then using them to guide the tracking algorithm. When used in conjunction with several state-of-the-art algorithms, this further increases their already good performance. Furthermore, we propose an unsupervised scheme that yields almost similar improvements without the need for ground truth.
versions: [{"created": "Fri, 2 Dec 2016 09:24:30 GMT", "version": "v1"}]
update_date: 2016-12-05
authors_parsed: [["Maksai", "Andrii", ""], ["Wang", "Xinchao", ""], ["Fleuret", "Francois", ""], ["Fua", "Pascal", ""]]
abstract: Many state-of-the-art approaches to people tracking rely on detecting them in each frame independently, grouping detections into short but reliable trajectory segments, and then further grouping them into full trajectories. This grouping typically relies on imposing local smoothness constraints but almost never on enforcing more global constraints on the trajectories. In this paper, we propose an approach to imposing global consistency by first inferring behavioral patterns from the ground truth and then using them to guide the tracking algorithm. When used in conjunction with several state-of-the-art algorithms, this further increases their already good performance. Furthermore, we propose an unsupervised scheme that yields almost similar improvements without the need for ground truth.
id: 2311.11532
submitter: Gustavo Silva
authors: Gustavo Silva, Paul Rodriguez
title: Optimal Hyperparameter $\epsilon$ for Adaptive Stochastic Optimizers through Gradient Histograms
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: Optimizers are essential components for successfully training deep neural network models. In order to achieve the best performance from such models, designers need to carefully choose the optimizer hyperparameters. However, this can be a computationally expensive and time-consuming process. Although it is known that all optimizer hyperparameters must be tuned for maximum performance, there is still a lack of clarity regarding the individual influence of minor priority hyperparameters, including the safeguard factor $\epsilon$ and momentum factor $\beta$, in leading adaptive optimizers (specifically, those based on the Adam optimizers). In this manuscript, we introduce a new framework based on gradient histograms to analyze and justify important attributes of adaptive optimizers, such as their optimal performance and the relationships and dependencies among hyperparameters. Furthermore, we propose a novel gradient histogram-based algorithm that automatically estimates a reduced and accurate search space for the safeguard hyperparameter $\epsilon$, where the optimal value can be easily found.
versions: [{"created": "Mon, 20 Nov 2023 04:34:19 GMT", "version": "v1"}]
update_date: 2023-11-21
authors_parsed: [["Silva", "Gustavo", ""], ["Rodriguez", "Paul", ""]]
abstract: Optimizers are essential components for successfully training deep neural network models. In order to achieve the best performance from such models, designers need to carefully choose the optimizer hyperparameters. However, this can be a computationally expensive and time-consuming process. Although it is known that all optimizer hyperparameters must be tuned for maximum performance, there is still a lack of clarity regarding the individual influence of minor priority hyperparameters, including the safeguard factor $\epsilon$ and momentum factor $\beta$, in leading adaptive optimizers (specifically, those based on the Adam optimizers). In this manuscript, we introduce a new framework based on gradient histograms to analyze and justify important attributes of adaptive optimizers, such as their optimal performance and the relationships and dependencies among hyperparameters. Furthermore, we propose a novel gradient histogram-based algorithm that automatically estimates a reduced and accurate search space for the safeguard hyperparameter $\epsilon$, where the optimal value can be easily found.
id: 2302.02187
submitter: Niklas Kühl Prof Dr
authors: Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, Gerhard Satzger
title: Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations
comments: arXiv admin note: text overlap with arXiv:2204.06916
journal-ref: ACM 28th International Conference on Intelligent User Interfaces (IUI), 2023
doi: 10.1145/3581641.3584066
report-no: null
categories: cs.AI cs.HC
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: AI advice is becoming increasingly popular, e.g., in investment and medical treatment decisions. As this advice is typically imperfect, decision-makers have to exert discretion as to whether actually follow that advice: they have to "appropriately" rely on correct and turn down incorrect advice. However, current research on appropriate reliance still lacks a common definition as well as an operational measurement concept. Additionally, no in-depth behavioral experiments have been conducted that help understand the factors influencing this behavior. In this paper, we propose Appropriateness of Reliance (AoR) as an underlying, quantifiable two-dimensional measurement concept. We develop a research model that analyzes the effect of providing explanations for AI advice. In an experiment with 200 participants, we demonstrate how these explanations influence the AoR, and, thus, the effectiveness of AI advice. Our work contributes fundamental concepts for the analysis of reliance behavior and the purposeful design of AI advisors.
versions: [{"created": "Sat, 4 Feb 2023 15:48:24 GMT", "version": "v1"}, {"created": "Tue, 7 Feb 2023 07:47:43 GMT", "version": "v2"}, {"created": "Thu, 13 Apr 2023 08:50:16 GMT", "version": "v3"}]
update_date: 2023-04-14
authors_parsed: [["Schemmer", "Max", ""], ["Kühl", "Niklas", ""], ["Benz", "Carina", ""], ["Bartos", "Andrea", ""], ["Satzger", "Gerhard", ""]]
abstract: AI advice is becoming increasingly popular, e.g., in investment and medical treatment decisions. As this advice is typically imperfect, decision-makers have to exert discretion as to whether actually follow that advice: they have to "appropriately" rely on correct and turn down incorrect advice. However, current research on appropriate reliance still lacks a common definition as well as an operational measurement concept. Additionally, no in-depth behavioral experiments have been conducted that help understand the factors influencing this behavior. In this paper, we propose Appropriateness of Reliance (AoR) as an underlying, quantifiable two-dimensional measurement concept. We develop a research model that analyzes the effect of providing explanations for AI advice. In an experiment with 200 participants, we demonstrate how these explanations influence the AoR, and, thus, the effectiveness of AI advice. Our work contributes fundamental concepts for the analysis of reliance behavior and the purposeful design of AI advisors.
id: 1101.1815
submitter: Suvansh Lal Mr.
authors: Suvansh Lal, Mohit Jain, Vikrant Chaplot
title: Approaches to Formal Verification of Security Protocols
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR
license: http://creativecommons.org/licenses/by/3.0/
orig_abstract: In recent times, many protocols have been proposed to provide security for various information and communication systems. Such protocols must be tested for their functional correctness before they are used in practice. Application of formal methods for verification of security protocols would enhance their reliability thereby, increasing the usability of systems that employ them. Thus, formal verification of security protocols has become a key issue in computer and communications security. In this paper we present, analyze and compare some prevalent approaches towards verification of secure systems. We follow the notion of - same goal through different approaches - as we formally analyze the Needham Schroeder Public Key protocol for Lowe's attack using each of our presented approaches.
versions: [{"created": "Mon, 10 Jan 2011 13:53:25 GMT", "version": "v1"}]
update_date: 2011-01-11
authors_parsed: [["Lal", "Suvansh", ""], ["Jain", "Mohit", ""], ["Chaplot", "Vikrant", ""]]
abstract: In recent times, many protocols have been proposed to provide security for various information and communication systems. Such protocols must be tested for their functional correctness before they are used in practice. Application of formal methods for verification of security protocols would enhance their reliability thereby, increasing the usability of systems that employ them. Thus, formal verification of security protocols has become a key issue in computer and communications security. In this paper we present, analyze and compare some prevalent approaches towards verification of secure systems. We follow the notion of - same goal through different approaches - as we formally analyze the Needham Schroeder Public Key protocol for Lowe's attack using each of our presented approaches.
id: 2405.11530
submitter: Sejik Park
authors: Sejik Park
title: Learning More Generalized Experts by Merging Experts in Mixture-of-Experts
comments: 12 pages, 3 figures
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: We observe that incorporating a shared layer in a mixture-of-experts can lead to performance degradation. This leads us to hypothesize that learning shared features poses challenges in deep learning, potentially caused by the same feature being learned as various different features. To address this issue, we track each expert's usage frequency and merge the two most frequently selected experts. We then update the least frequently selected expert using the combination of experts. This approach, combined with the subsequent learning of the router's expert selection, allows the model to determine if the most frequently selected experts have learned the same feature differently. If they have, the combined expert can be further trained to learn a more general feature. Consequently, our algorithm enhances transfer learning and mitigates catastrophic forgetting when applied to multi-domain task incremental learning.
versions: [{"created": "Sun, 19 May 2024 11:55:48 GMT", "version": "v1"}]
update_date: 2024-05-21
authors_parsed: [["Park", "Sejik", ""]]
abstract: We observe that incorporating a shared layer in a mixture-of-experts can lead to performance degradation. This leads us to hypothesize that learning shared features poses challenges in deep learning, potentially caused by the same feature being learned as various different features. To address this issue, we track each expert's usage frequency and merge the two most frequently selected experts. We then update the least frequently selected expert using the combination of experts. This approach, combined with the subsequent learning of the router's expert selection, allows the model to determine if the most frequently selected experts have learned the same feature differently. If they have, the combined expert can be further trained to learn a more general feature. Consequently, our algorithm enhances transfer learning and mitigates catastrophic forgetting when applied to multi-domain task incremental learning.
1701.01216
|
Tony T. Luo
|
T. Luo, S. S. Kanhere, H-P. Tan, F. Wu, and H. Wu
|
Crowdsourcing with Tullock contests: A new perspective
|
9 pages, 4 figures, 3 tables
|
Proc. IEEE INFOCOM, 2015, pp. 2515-2523
|
10.1109/INFOCOM.2015.7218641
| null |
cs.GT cs.HC cs.MA cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Incentive mechanisms for crowdsourcing have been extensively studied under
the framework of all-pay auctions. Along a distinct line, this paper proposes
to use Tullock contests as an alternative tool to design incentive mechanisms
for crowdsourcing. We are inspired by the conduciveness of Tullock contests to
attracting user entry (yet not necessarily a higher revenue) in other domains.
In this paper, we explore a new dimension in optimal Tullock contest design, by
superseding the contest prize---which is fixed in conventional Tullock
contests---with a prize function that is dependent on the (unknown) winner's
contribution, in order to maximize the crowdsourcer's utility. We show that
this approach leads to attractive practical advantages: (a) it is well-suited
for rapid prototyping in fully distributed web agents and smartphone apps; (b)
it overcomes the disincentive to participate caused by players' antagonism to
an increasing number of rivals. Furthermore, we optimize conventional,
fixed-prize Tullock contests to construct the most superior benchmark to
compare against our mechanism. Through extensive evaluations, we show that our
mechanism significantly outperforms the optimal benchmark, by over threefold
on the crowdsourcer's utility cum profit and up to ninefold on the players'
social welfare.
|
[
{
"created": "Thu, 5 Jan 2017 05:44:25 GMT",
"version": "v1"
}
] |
2017-01-06
|
[
[
"Luo",
"T.",
""
],
[
"Kanhere",
"S. S.",
""
],
[
"Tan",
"H-P.",
""
],
[
"Wu",
"F.",
""
],
[
"Wu",
"H.",
""
]
] |
Incentive mechanisms for crowdsourcing have been extensively studied under the framework of all-pay auctions. Along a distinct line, this paper proposes to use Tullock contests as an alternative tool to design incentive mechanisms for crowdsourcing. We are inspired by the conduciveness of Tullock contests to attracting user entry (yet not necessarily a higher revenue) in other domains. In this paper, we explore a new dimension in optimal Tullock contest design, by superseding the contest prize---which is fixed in conventional Tullock contests---with a prize function that is dependent on the (unknown) winner's contribution, in order to maximize the crowdsourcer's utility. We show that this approach leads to attractive practical advantages: (a) it is well-suited for rapid prototyping in fully distributed web agents and smartphone apps; (b) it overcomes the disincentive to participate caused by players' antagonism to an increasing number of rivals. Furthermore, we optimize conventional, fixed-prize Tullock contests to construct the most superior benchmark to compare against our mechanism. Through extensive evaluations, we show that our mechanism significantly outperforms the optimal benchmark, by over threefold on the crowdsourcer's utility cum profit and up to ninefold on the players' social welfare.
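The standard Tullock contest success function underlying this design is p_i = e_i^r / sum_j e_j^r. The sketch below evaluates players' expected payoffs when the fixed prize is replaced by a contribution-dependent prize, as the paper proposes; the linear prize function V(x) = a*x is only an illustrative assumption, not the paper's optimal prize function.

import numpy as np

def tullock_win_prob(efforts, r=1.0):
    # Classic Tullock success function: p_i = e_i^r / sum_j e_j^r.
    e = np.asarray(efforts, dtype=float) ** r
    return e / e.sum()

def expected_payoffs(efforts, a=2.0, r=1.0):
    # Prize depends on the winner's contribution (the paper's key idea);
    # V(x) = a * x is an assumed illustrative form.
    x = np.asarray(efforts, dtype=float)
    return tullock_win_prob(x, r) * (a * x) - x   # expected prize minus effort cost

print(expected_payoffs([1.0, 2.0, 4.0]))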
|
1704.00355
|
Neha Gupta
|
Moses Charikar, Neha Gupta, Roy Schwartz
|
Local Guarantees in Graph Cuts and Clustering
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Correlation Clustering is an elegant model that captures fundamental graph
cut problems such as Min $s-t$ Cut, Multiway Cut, and Multicut, extensively
studied in combinatorial optimization. Here, we are given a graph with edges
labeled $+$ or $-$ and the goal is to produce a clustering that agrees with the
labels as much as possible: $+$ edges within clusters and $-$ edges across
clusters. The classical approach towards Correlation Clustering (and other
graph cut problems) is to optimize a global objective. We depart from this and
study local objectives: minimizing the maximum number of disagreements for
edges incident on a single node, and the analogous max min agreements
objective. This naturally gives rise to a family of basic min-max graph cut
problems. A prototypical representative is Min Max $s-t$ Cut: find an $s-t$ cut
minimizing the largest number of cut edges incident on any node. We present the
following results: $(1)$ an $O(\sqrt{n})$-approximation for the problem of
minimizing the maximum total weight of disagreement edges incident on any node
(thus providing the first known approximation for the above family of min-max
graph cut problems), $(2)$ a remarkably simple $7$-approximation for minimizing
local disagreements in complete graphs (improving upon the previous best known
approximation of $48$), and $(3)$ a $1/(2+\varepsilon)$-approximation for
maximizing the minimum total weight of agreement edges incident on any node,
hence improving upon the $1/(4+\varepsilon)$-approximation that follows from
the study of approximate pure Nash equilibria in cut and party affiliation
games.
|
[
{
"created": "Sun, 2 Apr 2017 19:34:22 GMT",
"version": "v1"
}
] |
2017-04-04
|
[
[
"Charikar",
"Moses",
""
],
[
"Gupta",
"Neha",
""
],
[
"Schwartz",
"Roy",
""
]
] |
Correlation Clustering is an elegant model that captures fundamental graph cut problems such as Min $s-t$ Cut, Multiway Cut, and Multicut, extensively studied in combinatorial optimization. Here, we are given a graph with edges labeled $+$ or $-$ and the goal is to produce a clustering that agrees with the labels as much as possible: $+$ edges within clusters and $-$ edges across clusters. The classical approach towards Correlation Clustering (and other graph cut problems) is to optimize a global objective. We depart from this and study local objectives: minimizing the maximum number of disagreements for edges incident on a single node, and the analogous max min agreements objective. This naturally gives rise to a family of basic min-max graph cut problems. A prototypical representative is Min Max $s-t$ Cut: find an $s-t$ cut minimizing the largest number of cut edges incident on any node. We present the following results: $(1)$ an $O(\sqrt{n})$-approximation for the problem of minimizing the maximum total weight of disagreement edges incident on any node (thus providing the first known approximation for the above family of min-max graph cut problems), $(2)$ a remarkably simple $7$-approximation for minimizing local disagreements in complete graphs (improving upon the previous best known approximation of $48$), and $(3)$ a $1/(2+\varepsilon)$-approximation for maximizing the minimum total weight of agreement edges incident on any node, hence improving upon the $1/(4+\varepsilon)$-approximation that follows from the study of approximate pure Nash equilibria in cut and party affiliation games.
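The local objective described above can be evaluated directly. The following sketch computes, for a given clustering of a graph with +/- labeled edges, each node's number of incident disagreement edges and returns the maximum, i.e., the min-max disagreement value of that clustering.

from collections import defaultdict

def max_local_disagreement(edges, clusters):
    # For each node, count incident '+' edges cut by the clustering and
    # '-' edges kept inside a cluster; return the worst node's count.
    dis = defaultdict(int)
    for u, v, sign in edges:
        same = clusters[u] == clusters[v]
        if (sign == '+' and not same) or (sign == '-' and same):
            dis[u] += 1
            dis[v] += 1
    return max(dis.values(), default=0)

# Toy instance: a triangle with one '-' edge, all nodes in one cluster.
edges = [(0, 1, '+'), (1, 2, '+'), (0, 2, '-')]
print(max_local_disagreement(edges, {0: 0, 1: 0, 2: 0}))  # 1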
|
1502.03358
|
Farshad Naghibi
|
Farshad Naghibi, Somayeh Salimi, Mikael Skoglund
|
The CEO Problem with Secrecy Constraints
|
Accepted for publication in IEEE Transactions on Information
Forensics and Security, 17 pages, 4 figures
| null |
10.1109/TIFS.2015.2404134
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study a lossy source coding problem with secrecy constraints in which a
remote information source should be transmitted to a single destination via
multiple agents in the presence of a passive eavesdropper. The agents observe
noisy versions of the source and independently encode and transmit their
observations to the destination via noiseless rate-limited links. The
destination should estimate the remote source based on the information received
from the agents within a certain mean distortion threshold. The eavesdropper,
with access to side information correlated to the source, is able to listen in
on one of the links from the agents to the destination in order to obtain as
much information as possible about the source. This problem can be viewed as
the so-called CEO problem with additional secrecy constraints. We establish
inner and outer bounds on the rate-distortion-equivocation region of this
problem. We also obtain the region in special cases where the bounds are tight.
Furthermore, we study the quadratic Gaussian case and provide the optimal
rate-distortion-equivocation region when the eavesdropper has no side
information and an achievable region for a more general setup with side
information at the eavesdropper.
|
[
{
"created": "Wed, 11 Feb 2015 16:19:21 GMT",
"version": "v1"
}
] |
2015-02-19
|
[
[
"Naghibi",
"Farshad",
""
],
[
"Salimi",
"Somayeh",
""
],
[
"Skoglund",
"Mikael",
""
]
] |
We study a lossy source coding problem with secrecy constraints in which a remote information source should be transmitted to a single destination via multiple agents in the presence of a passive eavesdropper. The agents observe noisy versions of the source and independently encode and transmit their observations to the destination via noiseless rate-limited links. The destination should estimate the remote source based on the information received from the agents within a certain mean distortion threshold. The eavesdropper, with access to side information correlated to the source, is able to listen in on one of the links from the agents to the destination in order to obtain as much information as possible about the source. This problem can be viewed as the so-called CEO problem with additional secrecy constraints. We establish inner and outer bounds on the rate-distortion-equivocation region of this problem. We also obtain the region in special cases where the bounds are tight. Furthermore, we study the quadratic Gaussian case and provide the optimal rate-distortion-equivocation region when the eavesdropper has no side information and an achievable region for a more general setup with side information at the eavesdropper.
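To make the observation model above concrete, here is a toy Monte Carlo estimate of the distortion when a remote Gaussian source is estimated from noisy agent observations. Rate-limited encoding, the eavesdropper, and the equivocation constraint are all omitted; the closed-form MMSE combiner applies only to this idealized, unquantized Gaussian case.

import numpy as np

rng = np.random.default_rng(1)
n, agents, noise_var = 100_000, 3, 0.25
x = rng.standard_normal(n)                                       # remote source X ~ N(0, 1)
obs = x + np.sqrt(noise_var) * rng.standard_normal((agents, n))  # noisy agent observations
x_hat = obs.sum(axis=0) / (agents + noise_var)                   # MMSE estimate for this toy case
print(np.mean((x - x_hat) ** 2))   # ~ noise_var / (agents + noise_var) ~= 0.077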
|
2007.00236
|
Cunxiang Wang
|
Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu and
Yue Zhang
|
SemEval-2020 Task 4: Commonsense Validation and Explanation
|
Task description paper of SemEval-2020 Task 4: Commonsense Validation
and Explanation
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present SemEval-2020 Task 4, Commonsense Validation and
Explanation (ComVE), which includes three subtasks, aiming to evaluate whether
a system can distinguish a natural language statement that makes sense to
humans from one that does not, and provide the reasons. Specifically, in our
first subtask, the participating systems are required to choose from two
natural language statements of similar wording the one that makes sense and the
one that does not. The second subtask additionally asks a system to select the
key reason from three options why a given statement does not make sense. In the
third subtask, a participating system needs to generate the reason. We finally
attracted 39 teams participating in at least one of the three subtasks. For
Subtask A and Subtask B, the performances of top-ranked systems are close to
that of humans. However, for Subtask C, there is still a relatively large gap
between systems and human performance. The dataset used in our task can be
found at
https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation;
the leaderboard can be found at
https://competitions.codalab.org/competitions/21080#results.
|
[
{
"created": "Wed, 1 Jul 2020 04:41:05 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Aug 2020 15:13:40 GMT",
"version": "v2"
}
] |
2020-08-04
|
[
[
"Wang",
"Cunxiang",
""
],
[
"Liang",
"Shuailong",
""
],
[
"Jin",
"Yili",
""
],
[
"Wang",
"Yilong",
""
],
[
"Zhu",
"Xiaodan",
""
],
[
"Zhang",
"Yue",
""
]
] |
In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons. Specifically, in our first subtask, the participating systems are required to choose from two natural language statements of similar wording the one that makes sense and the one that does not. The second subtask additionally asks a system to select the key reason from three options why a given statement does not make sense. In the third subtask, a participating system needs to generate the reason. We finally attracted 39 teams participating in at least one of the three subtasks. For Subtask A and Subtask B, the performances of top-ranked systems are close to that of humans. However, for Subtask C, there is still a relatively large gap between systems and human performance. The dataset used in our task can be found at https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation; the leaderboard can be found at https://competitions.codalab.org/competitions/21080#results.
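A common baseline for the first subtask, not necessarily what participants used, is to score each statement with a language model and flag the one with the higher per-token negative log-likelihood as the nonsensical one. A minimal sketch using GPT-2 via the transformers library (assuming it is installed); the example statements are illustrative:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def mean_nll(sentence):
    # Mean per-token negative log-likelihood of the sentence under GPT-2.
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

s1 = "He put a turkey into the fridge."
s2 = "He put an elephant into the fridge."
print("against common sense:", s1 if mean_nll(s1) > mean_nll(s2) else s2)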
|
2304.13518
|
Yuqi Han
|
Yuqi Han and Tao Yu and Xiaohang Yu and Yuwang Wang and Qionghai Dai
|
Super-NeRF: View-consistent Detail Generation for NeRF super-resolution
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The neural radiance field (NeRF) has achieved remarkable success in modeling 3D
scenes and synthesizing high-fidelity novel views. However, existing NeRF-based
methods focus on making full use of the image resolution to generate novel
views, and pay less attention to generating details under limited input
resolution. In analogy to the extensive use of image super-resolution, NeRF
super-resolution is an effective way to generate a high-resolution implicit
representation of 3D scenes and holds great potential for applications. Up
to now, such an important topic is still under-explored. In this paper, we
propose a NeRF super-resolution method, named Super-NeRF, to generate
high-resolution NeRF from only low-resolution inputs. Given multi-view
low-resolution images, Super-NeRF constructs a consistency-controlling
super-resolution module to generate view-consistent high-resolution details for
NeRF. Specifically, an optimizable latent code is introduced for each
low-resolution input image to control the 2D super-resolution images to
converge to the view-consistent output. The latent codes of each low-resolution
image are optimized synergistically with the target Super-NeRF representation
to fully utilize the view consistency constraint inherent in NeRF construction.
We verify the effectiveness of Super-NeRF on synthetic, real-world, and
AI-generated NeRF datasets. Super-NeRF achieves state-of-the-art NeRF
super-resolution performance on high-resolution detail generation and
cross-view consistency.
|
[
{
"created": "Wed, 26 Apr 2023 12:54:40 GMT",
"version": "v1"
}
] |
2023-04-27
|
[
[
"Han",
"Yuqi",
""
],
[
"Yu",
"Tao",
""
],
[
"Yu",
"Xiaohang",
""
],
[
"Wang",
"Yuwang",
""
],
[
"Dai",
"Qionghai",
""
]
] |
The neural radiance field (NeRF) has achieved remarkable success in modeling 3D scenes and synthesizing high-fidelity novel views. However, existing NeRF-based methods focus on making full use of the image resolution to generate novel views, and pay less attention to generating details under limited input resolution. In analogy to the extensive use of image super-resolution, NeRF super-resolution is an effective way to generate a high-resolution implicit representation of 3D scenes and holds great potential for applications. Up to now, such an important topic is still under-explored. In this paper, we propose a NeRF super-resolution method, named Super-NeRF, to generate high-resolution NeRF from only low-resolution inputs. Given multi-view low-resolution images, Super-NeRF constructs a consistency-controlling super-resolution module to generate view-consistent high-resolution details for NeRF. Specifically, an optimizable latent code is introduced for each low-resolution input image to control the 2D super-resolution images to converge to the view-consistent output. The latent codes of each low-resolution image are optimized synergistically with the target Super-NeRF representation to fully utilize the view consistency constraint inherent in NeRF construction. We verify the effectiveness of Super-NeRF on synthetic, real-world, and AI-generated NeRF datasets. Super-NeRF achieves state-of-the-art NeRF super-resolution performance on high-resolution detail generation and cross-view consistency.
|
1809.05606
|
Yimin Yang
|
Yimin Yang, Q.M.Jonathan Wu, Xiexing Feng, Thangarajah Akilan
|
Non-iterative recomputation of dense layers for performance improvement
of DCNN
|
11
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An iterative method of learning has become a paradigm for training deep
convolutional neural networks (DCNN). However, utilizing a non-iterative
learning strategy can accelerate the training process of the DCNN and,
surprisingly, such an approach has been rarely explored by the deep learning
(DL) community. This motivates us to introduce a non-iterative learning
strategy that eliminates backpropagation (BP) at the top dense or fully
connected (FC) layers of the DCNN, resulting in lower training time and higher
performance. The proposed method exploits the Moore-Penrose Inverse to pull
back the current residual error to each FC layer, generating well-generalized
features. Then, using the recomputed features, i.e., the new generalized
features, the weights of each FC layer are computed according to the
Moore-Penrose Inverse. We evaluate the proposed approach on six widely accepted
object recognition benchmark datasets: Scene-15, CIFAR-10, CIFAR-100, SUN-397,
Places365, and ImageNet. The experimental results show that the proposed method
obtains significant improvements over 30 state-of-the-art methods.
Interestingly, it also indicates that any DCNN with the proposed method can
provide better performance than the same network with its original training
based on BP.
|
[
{
"created": "Fri, 14 Sep 2018 22:24:52 GMT",
"version": "v1"
}
] |
2018-09-18
|
[
[
"Yang",
"Yimin",
""
],
[
"Wu",
"Q. M. Jonathan",
""
],
[
"Feng",
"Xiexing",
""
],
[
"Akilan",
"Thangarajah",
""
]
] |
An iterative method of learning has become a paradigm for training deep convolutional neural networks (DCNN). However, utilizing a non-iterative learning strategy can accelerate the training process of the DCNN and, surprisingly, such an approach has been rarely explored by the deep learning (DL) community. This motivates us to introduce a non-iterative learning strategy that eliminates backpropagation (BP) at the top dense or fully connected (FC) layers of the DCNN, resulting in lower training time and higher performance. The proposed method exploits the Moore-Penrose Inverse to pull back the current residual error to each FC layer, generating well-generalized features. Then, using the recomputed features, i.e., the new generalized features, the weights of each FC layer are computed according to the Moore-Penrose Inverse. We evaluate the proposed approach on six widely accepted object recognition benchmark datasets: Scene-15, CIFAR-10, CIFAR-100, SUN-397, Places365, and ImageNet. The experimental results show that the proposed method obtains significant improvements over 30 state-of-the-art methods. Interestingly, it also indicates that any DCNN with the proposed method can provide better performance than the same network with its original training based on BP.
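The closed-form weight computation at an FC layer can be sketched as an ordinary least-squares solve via the Moore-Penrose pseudoinverse. The shapes and random data below are placeholders, and the pull-back of the residual error through multiple FC layers is omitted:

import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((256, 64))   # features entering an FC layer (batch x dim)
T = rng.standard_normal((256, 10))   # desired outputs of that layer

# Non-iterative weight computation: solve H @ W ~= T in closed form with
# the Moore-Penrose pseudoinverse instead of running gradient descent.
W = np.linalg.pinv(H) @ T
print(np.mean((H @ W - T) ** 2))     # least-squares residual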
|
1309.0871
|
EPTCS
|
Oded Maler (CNRS-VERIMAG, University of Grenoble), \'Ad\'am M.
Hal\'asz (Department of Mathematics, West Virginia University), Olivier
Lebeltel (CNRS-VERIMAG, University of Grenoble), Ouri Maler (Grenoble)
|
Exploring the Dynamics of Mass Action Systems
|
In Proceedings HSB 2013, arXiv:1308.5724
|
EPTCS 125, 2013, pp. 84-91
|
10.4204/EPTCS.125.6
| null |
cs.CE cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Populus toolkit for exploring the dynamics of mass action
systems under different assumptions.
|
[
{
"created": "Tue, 3 Sep 2013 23:41:36 GMT",
"version": "v1"
}
] |
2013-09-05
|
[
[
"Maler",
"Oded",
"",
"CNRS-VERIMAG, University of Grenoble"
],
[
"Halász",
"Ádám M.",
"",
"Department of Methematics, West Virginia University"
],
[
"Lebeltel",
"Olivier",
"",
"CNRS-VERIMAG, University of Grenoble"
],
[
"Maler",
"Ouri",
"",
"Grenoble"
]
] |
We present the Populus toolkit for exploring the dynamics of mass action systems under different assumptions.
|
2403.05193
|
Martina Benini
|
Martina Benini, Silvia Gallucci, Marta Bonato, Marta Parazzini,
Gabriella Tognola
|
Evaluation of Road User Radio-Frequency Exposure Levels in an Urban
Environment from Vehicular Antennas and the Infrastructure in ITS-G5 5.9 GHz
Communication
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
This study aims to investigate the variability of exposure levels among road
users generated in a realistic urban scenario by Vehicle-to-Vehicle (V2V) and
Vehicle-to-Infrastructure (V2I) communication technologies operating at 5.9
GHz. The exposure levels were evaluated in terms of whole-body Specific
Absorption Rate (wbSAR) [W/kg] in three different human models, ranging from
children to adults. We calculated the electromagnetic field exposure level
generated by V2V and V2I using ray tracing, and we assessed the resulting
wbSAR in urban exposure scenarios with an increasing number of transmitting
antennas.
Whole-body SAR was generally very low, on the order of 10^-4 W/kg. The maximum
wbSAR, of 4.9x10^-4 W/kg, was obtained in the worst-case exposure condition
comprising more than one transmitting vehicle and was found in the adult model
for a distance within 10 m from the transmitting cars. We found that the height
of the human model highly impacted the exposure level. Namely, the child (which
is the shortest human model) was generally much less exposed than adults. All
the wbSAR values found by varying the number of transmitting antennas, the
distance of the road user from the antennas, and the type of human model (adult
vs. child) were very well below the limits set by the ICNIRP and IEEE
guidelines of 0.08 W/kg for human exposure in the 100 kHz - 300 GHz range.
|
[
{
"created": "Fri, 8 Mar 2024 10:14:40 GMT",
"version": "v1"
}
] |
2024-03-11
|
[
[
"Benini",
"Martina",
""
],
[
"Gallucci",
"Silvia",
""
],
[
"Bonato",
"Marta",
""
],
[
"Parazzini",
"Marta",
""
],
[
"Tognola",
"Gabriella",
""
]
] |
This study aims to investigate the variability of exposure levels among road users generated in a realistic urban scenario by Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication technologies operating at 5.9 GHz. The exposure levels were evaluated in terms of whole-body Specific Absorption Rate (wbSAR) [W/kg] in three different human models, ranging from children to adults. We calculated the electromagnetic field exposure level generated by V2V and V2I using ray tracing, and we assessed the resulting wbSAR in urban exposure scenarios with an increasing number of transmitting antennas. Whole-body SAR was generally very low, on the order of 10^-4 W/kg. The maximum wbSAR, of 4.9x10^-4 W/kg, was obtained in the worst-case exposure condition comprising more than one transmitting vehicle and was found in the adult model for a distance within 10 m from the transmitting cars. We found that the height of the human model highly impacted the exposure level. Namely, the child (which is the shortest human model) was generally much less exposed than adults. All the wbSAR values found by varying the number of transmitting antennas, the distance of the road user from the antennas, and the type of human model (adult vs. child) were very well below the limits set by the ICNIRP and IEEE guidelines of 0.08 W/kg for human exposure in the 100 kHz - 300 GHz range.
|
2408.03945
|
Kristina Schaaff
|
Kristina Schaaff and Marc-Andr\'e Heidelmann
|
Impacts of Anthropomorphizing Large Language Models in Learning
Environments
|
Presented at Affective Computing Pre-Conference at ISRE 2024
| null | null | null |
cs.CL cs.AI cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) are increasingly being used in learning
environments to support teaching, be it as learning companions or as tutors.
With our contribution, we aim to discuss the implications of the
anthropomorphization of LLMs in learning environments on educational theory to
build a foundation for more effective learning outcomes and understand their
emotional impact on learners. According to the media equation, people tend to
respond to media in the same way as they would respond to another person. A
study conducted by the Georgia Institute of Technology showed that chatbots can
be successfully implemented in learning environments. In this study, learners
in selected online courses were unable to distinguish the chatbot from a "real"
teacher. As LLM-based chatbots such as OpenAI's GPT series are increasingly
used in educational tools, it is important to understand how the attribution
processes to LLM-based chatbots in terms of anthropomorphization affect
learners' emotions.
|
[
{
"created": "Mon, 22 Jul 2024 06:28:54 GMT",
"version": "v1"
}
] |
2024-08-09
|
[
[
"Schaaff",
"Kristina",
""
],
[
"Heidelmann",
"Marc-André",
""
]
] |
Large Language Models (LLMs) are increasingly being used in learning environments to support teaching, be it as learning companions or as tutors. With our contribution, we aim to discuss the implications of the anthropomorphization of LLMs in learning environments on educational theory to build a foundation for more effective learning outcomes and understand their emotional impact on learners. According to the media equation, people tend to respond to media in the same way as they would respond to another person. A study conducted by the Georgia Institute of Technology showed that chatbots can be successfully implemented in learning environments. In this study, learners in selected online courses were unable to distinguish the chatbot from a "real" teacher. As LLM-based chatbots such as OpenAI's GPT series are increasingly used in educational tools, it is important to understand how the attribution processes to LLM-based chatbots in terms of anthropomorphization affect learners' emotions.
|
2310.04197
|
Angel Casanova
|
\'Angel Casanova Bienzobas, Alfonso S\'anchez-Maci\'an
|
Threat Trekker: An Approach to Cyber Threat Hunting
|
I am disseminating this outcome to all of you, despite the fact that
the results may appear somewhat idealistic, given that certain datasets
utilized for the training of the machine learning model comprise simulated
data
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Threat hunting is a proactive methodology for exploring, detecting and
mitigating cyberattacks within complex environments. As opposed to conventional
detection systems, threat hunting strategies assume adversaries have
infiltrated the system; as a result they proactively search out any unusual
patterns or activities which might indicate intrusion attempts.
Historically, this endeavour has been pursued using three investigation
methodologies: (1) Hypothesis-Driven Investigations; (2) Indicator of
Compromise (IOC); and (3) High-level machine learning analysis-based
approaches. Therefore, this paper introduces a novel machine learning paradigm
known as Threat Trekker. This proposal utilizes connectors to feed data
directly into an event streaming channel for processing by the algorithm and
to provide feedback to its host network.
Conclusions drawn from these experiments clearly establish the efficacy of
employing machine learning for classifying more subtle attacks.
|
[
{
"created": "Fri, 6 Oct 2023 12:29:41 GMT",
"version": "v1"
}
] |
2023-10-09
|
[
[
"Bienzobas",
"Ángel Casanova",
""
],
[
"Sánchez-Macián",
"Alfonso",
""
]
] |
Threat hunting is a proactive methodology for exploring, detecting and mitigating cyberattacks within complex environments. As opposed to conventional detection systems, threat hunting strategies assume adversaries have infiltrated the system; as a result they proactively search out any unusual patterns or activities which might indicate intrusion attempts. Historically, this endeavour has been pursued using three investigation methodologies: (1) Hypothesis-Driven Investigations; (2) Indicator of Compromise (IOC); and (3) High-level machine learning analysis-based approaches. Therefore, this paper introduces a novel machine learning paradigm known as Threat Trekker. This proposal utilizes connectors to feed data directly into an event streaming channel for processing by the algorithm and to provide feedback to its host network. Conclusions drawn from these experiments clearly establish the efficacy of employing machine learning for classifying more subtle attacks.
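A minimal sketch of the processing loop described above, with an in-memory queue standing in for the event streaming channel and a generic scikit-learn classifier standing in for the paper's model; the features, labels, and connector are all placeholders:

from collections import deque
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Train a toy classifier on synthetic "network event" feature vectors;
# these are not the paper's datasets.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))
y = (X[:, 0] + X[:, 3] > 1).astype(int)       # pretend label: benign vs. suspicious
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# An in-memory deque stands in for the event streaming channel; a real
# deployment would consume from a broker through a connector.
stream = deque(rng.standard_normal((5, 8)))
while stream:
    event = stream.popleft()
    verdict = clf.predict(event.reshape(1, -1))[0]
    print("alert" if verdict else "ok")       # feedback returned to the host network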
|
2205.06255
|
Qianqian Wang
|
Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless,
Janne Kontkanen
|
3D Moments from Near-Duplicate Photos
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce 3D Moments, a new computational photography effect. As input we
take a pair of near-duplicate photos, i.e., photos of moving subjects from
similar viewpoints, common in people's photo collections. As output, we produce
a video that smoothly interpolates the scene motion from the first photo to the
second, while also producing camera motion with parallax that gives a
heightened sense of 3D. To achieve this effect, we represent the scene as a
pair of feature-based layered depth images augmented with scene flow. This
representation enables motion interpolation along with independent control of
the camera viewpoint. Our system produces photorealistic space-time videos with
motion parallax and scene dynamics, while plausibly recovering regions occluded
in the original views. We conduct extensive experiments demonstrating superior
performance over baselines on public datasets and in-the-wild photos. Project
page: https://3d-moments.github.io/
|
[
{
"created": "Thu, 12 May 2022 17:56:18 GMT",
"version": "v1"
}
] |
2022-05-13
|
[
[
"Wang",
"Qianqian",
""
],
[
"Li",
"Zhengqi",
""
],
[
"Salesin",
"David",
""
],
[
"Snavely",
"Noah",
""
],
[
"Curless",
"Brian",
""
],
[
"Kontkanen",
"Janne",
""
]
] |
We introduce 3D Moments, a new computational photography effect. As input we take a pair of near-duplicate photos, i.e., photos of moving subjects from similar viewpoints, common in people's photo collections. As output, we produce a video that smoothly interpolates the scene motion from the first photo to the second, while also producing camera motion with parallax that gives a heightened sense of 3D. To achieve this effect, we represent the scene as a pair of feature-based layered depth images augmented with scene flow. This representation enables motion interpolation along with independent control of the camera viewpoint. Our system produces photorealistic space-time videos with motion parallax and scene dynamics, while plausibly recovering regions occluded in the original views. We conduct extensive experiments demonstrating superior performance over baselines on public datasets and in-the-wild photos. Project page: https://3d-moments.github.io/
|
2407.15373
|
Dizhi Ma
|
Dizhi Ma, Xiyun Hu, Jingyu Shi, Mayank Patel, Rahul Jain, Ziyi Liu,
Zhengzhe Zhu and Karthik Ramani
|
avaTTAR: Table Tennis Stroke Training with On-body and Detached
Visualization in Augmented Reality
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Table tennis stroke training is a critical aspect of player development. We
designed a new augmented reality (AR) system, avaTTAR, for table tennis stroke
training. The system provides both "on-body" (first-person view) and "detached"
(third-person view) visual cues, enabling users to visualize target strokes and
correct their attempts effectively with this dual-perspective setup. By
employing a combination of pose estimation algorithms and IMU sensors, avaTTAR
captures and reconstructs the 3D body pose and paddle orientation of users
during practice, allowing real-time comparison with expert strokes. Through a
user study, we affirm avaTTAR's capacity to amplify player experience and
training results.
|
[
{
"created": "Mon, 22 Jul 2024 04:47:16 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Jul 2024 15:13:46 GMT",
"version": "v2"
}
] |
2024-07-29
|
[
[
"Ma",
"Dizhi",
""
],
[
"Hu",
"Xiyun",
""
],
[
"Shi",
"Jingyu",
""
],
[
"Patel",
"Mayank",
""
],
[
"Jain",
"Rahul",
""
],
[
"Liu",
"Ziyi",
""
],
[
"Zhu",
"Zhengzhe",
""
],
[
"Ramani",
"Karthik",
""
]
] |
Table tennis stroke training is a critical aspect of player development. We designed a new augmented reality (AR) system, avaTTAR, for table tennis stroke training. The system provides both "on-body" (first-person view) and "detached" (third-person view) visual cues, enabling users to visualize target strokes and correct their attempts effectively with this dual-perspective setup. By employing a combination of pose estimation algorithms and IMU sensors, avaTTAR captures and reconstructs the 3D body pose and paddle orientation of users during practice, allowing real-time comparison with expert strokes. Through a user study, we affirm avaTTAR's capacity to amplify player experience and training results.
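One simple way to realize the "real-time comparison with expert strokes" is a per-joint error between the reconstructed user pose and the target stroke pose. The pelvis-centering and the metric below are assumptions, since the abstract does not state the comparison function:

import numpy as np

def pose_error(user_joints, expert_joints):
    # Both inputs are (J, 3) arrays of joint positions. Poses are centered
    # on the pelvis (joint 0) so the comparison is translation-invariant.
    u = user_joints - user_joints[0]
    e = expert_joints - expert_joints[0]
    return np.linalg.norm(u - e, axis=1)      # one error value per joint

rng = np.random.default_rng(0)
expert = rng.standard_normal((17, 3))
user = expert + 0.05 * rng.standard_normal((17, 3))
print(pose_error(user, expert).mean())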
|
2005.13102
|
Bangalore Ravi Kiran
|
Leonardo Gigli, B Ravi Kiran, Thomas Paul, Andres Serna, Nagarjuna
Vemuri, Beatriz Marcotegui, Santiago Velasco-Forero
|
Road Segmentation on low resolution Lidar point clouds for autonomous
vehicles
|
ISPRS 2020
| null | null | null |
cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud datasets for perception tasks in the context of autonomous
driving often rely on high resolution 64-layer Light Detection and Ranging
(LIDAR) scanners. They are expensive to deploy on real-world autonomous driving
sensor architectures which usually employ 16/32 layer LIDARs. We evaluate the
effect of subsampling image based representations of dense point clouds on the
accuracy of the road segmentation task. In our experiments the low resolution
16/32 layer LIDAR point clouds are simulated by subsampling the original 64
layer data, for subsequent transformation into a feature map in the
Bird-Eye-View (BEV) and SphericalView (SV) representations of the point cloud.
We introduce the usage of the local normal vector with the LIDAR's spherical
coordinates as an input channel to existing LoDNN architectures. We demonstrate
that this local normal feature in conjunction with classical features not only
improves performance for binary road segmentation on full resolution point
clouds, but it also reduces the negative impact on the accuracy when
subsampling dense point clouds as compared to the usage of classical features
alone. We assess our method with several experiments on two datasets: KITTI
Road-segmentation benchmark and the recently released Semantic KITTI dataset.
|
[
{
"created": "Wed, 27 May 2020 00:38:39 GMT",
"version": "v1"
}
] |
2020-05-28
|
[
[
"Gigli",
"Leonardo",
""
],
[
"Kiran",
"B Ravi",
""
],
[
"Paul",
"Thomas",
""
],
[
"Serna",
"Andres",
""
],
[
"Vemuri",
"Nagarjuna",
""
],
[
"Marcotegui",
"Beatriz",
""
],
[
"Velasco-Forero",
"Santiago",
""
]
] |
Point cloud datasets for perception tasks in the context of autonomous driving often rely on high resolution 64-layer Light Detection and Ranging (LIDAR) scanners. They are expensive to deploy on real-world autonomous driving sensor architectures which usually employ 16/32 layer LIDARs. We evaluate the effect of subsampling image based representations of dense point clouds on the accuracy of the road segmentation task. In our experiments the low resolution 16/32 layer LIDAR point clouds are simulated by subsampling the original 64 layer data, for subsequent transformation into a feature map in the Bird-Eye-View (BEV) and SphericalView (SV) representations of the point cloud. We introduce the usage of the local normal vector with the LIDAR's spherical coordinates as an input channel to existing LoDNN architectures. We demonstrate that this local normal feature in conjunction with classical features not only improves performance for binary road segmentation on full resolution point clouds, but it also reduces the negative impact on the accuracy when subsampling dense point clouds as compared to the usage of classical features alone. We assess our method with several experiments on two datasets: KITTI Road-segmentation benchmark and the recently released Semantic KITTI dataset.
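Simulating the 16/32-layer LIDARs by subsampling 64-layer scans, as described above, amounts to keeping every k-th ring. A minimal sketch with synthetic points and an assumed per-point ring index:

import numpy as np

def subsample_rings(points, ring, keep_every=4):
    # Keep every k-th ring of a 64-layer scan: keep_every=4 simulates a
    # 16-layer LIDAR, keep_every=2 a 32-layer one.
    return points[ring % keep_every == 0]

# Toy scan: 64 rings with 1000 points each.
rng = np.random.default_rng(0)
pts = rng.standard_normal((64_000, 3))
ring = np.repeat(np.arange(64), 1000)
print(subsample_rings(pts, ring, keep_every=4).shape)   # (16000, 3)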
|
2105.05026
|
Chongxuan Li
|
Guoqiang Wu, Chongxuan Li, Kun Xu, Jun Zhu
|
Rethinking and Reweighting the Univariate Losses for Multi-Label
Ranking: Consistency and Generalization
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
(Partial) ranking loss is a commonly used evaluation measure for multi-label
classification, which is usually optimized with convex surrogates for
computational efficiency. Prior theoretical work on multi-label ranking mainly
focuses on (Fisher) consistency analyses. However, there is a gap between
existing theory and practice -- some pairwise losses can lead to promising
performance but lack consistency, while some univariate losses are consistent
but usually have no clear superiority in practice. In this paper, we attempt to
fill this gap through a systematic study from two complementary perspectives of
consistency and generalization error bounds of learning algorithms. Our results
show that learning algorithms with the consistent univariate loss have an error
bound of $O(c)$ ($c$ is the number of labels), while algorithms with the
inconsistent pairwise loss depend on $O(\sqrt{c})$ as shown in prior work. This
explains that the latter can achieve better performance than the former in
practice. Moreover, we present an inconsistent reweighted univariate loss-based
learning algorithm that enjoys an error bound of $O(\sqrt{c})$ for promising
performance as well as the computational efficiency of univariate losses.
Finally, experimental results validate our theoretical analyses.
|
[
{
"created": "Mon, 10 May 2021 09:23:27 GMT",
"version": "v1"
}
] |
2021-05-12
|
[
[
"Wu",
"Guoqiang",
""
],
[
"Li",
"Chongxuan",
""
],
[
"Xu",
"Kun",
""
],
[
"Zhu",
"Jun",
""
]
] |
(Partial) ranking loss is a commonly used evaluation measure for multi-label classification, which is usually optimized with convex surrogates for computational efficiency. Prior theoretical work on multi-label ranking mainly focuses on (Fisher) consistency analyses. However, there is a gap between existing theory and practice -- some pairwise losses can lead to promising performance but lack consistency, while some univariate losses are consistent but usually have no clear superiority in practice. In this paper, we attempt to fill this gap through a systematic study from two complementary perspectives of consistency and generalization error bounds of learning algorithms. Our results show that learning algorithms with the consistent univariate loss have an error bound of $O(c)$ ($c$ is the number of labels), while algorithms with the inconsistent pairwise loss depend on $O(\sqrt{c})$ as shown in prior work. This explains that the latter can achieve better performance than the former in practice. Moreover, we present an inconsistent reweighted univariate loss-based learning algorithm that enjoys an error bound of $O(\sqrt{c})$ for promising performance as well as the computational efficiency of univariate losses. Finally, experimental results validate our theoretical analyses.
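For reference, the (partial) ranking loss used as the evaluation measure above counts mis-ordered (relevant, irrelevant) label pairs. A small sketch, with ties counted as half an error (a common convention):

import numpy as np

def ranking_loss(scores, labels):
    # Fraction of (relevant, irrelevant) label pairs mis-ordered by the
    # predicted scores; ties contribute half an error.
    pos, neg = scores[labels == 1], scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return 0.0
    diff = pos[:, None] - neg[None, :]
    return float(np.mean((diff < 0) + 0.5 * (diff == 0)))

scores = np.array([0.9, 0.2, 0.6, 0.4])
labels = np.array([1, 0, 1, 0])
print(ranking_loss(scores, labels))   # 0.0: every relevant label outranks every irrelevant one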
|
2403.14623
|
Zhicong Tang
|
Zhicong Tang, Tiankai Hang, Shuyang Gu, Dong Chen, Baining Guo
|
Simplified Diffusion Schr\"odinger Bridge
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel theoretical simplification of the Diffusion
Schr\"odinger Bridge (DSB) that facilitates its unification with Score-based
Generative Models (SGMs), addressing the limitations of DSB in complex data
generation and enabling faster convergence and enhanced performance. By
employing SGMs as an initial solution for DSB, our approach capitalizes on the
strengths of both frameworks, ensuring a more efficient training process and
improving the performance of SGM. We also propose a reparameterization
technique that, despite theoretical approximations, practically improves the
network's fitting capabilities. Our extensive experimental evaluations confirm
the effectiveness of the simplified DSB, demonstrating its significant
improvements. We believe the contributions of this work pave the way for
advanced generative modeling.
|
[
{
"created": "Thu, 21 Mar 2024 17:59:41 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Mar 2024 16:49:35 GMT",
"version": "v2"
},
{
"created": "Mon, 27 May 2024 04:44:22 GMT",
"version": "v3"
},
{
"created": "Tue, 13 Aug 2024 04:34:58 GMT",
"version": "v4"
}
] |
2024-08-14
|
[
[
"Tang",
"Zhicong",
""
],
[
"Hang",
"Tiankai",
""
],
[
"Gu",
"Shuyang",
""
],
[
"Chen",
"Dong",
""
],
[
"Guo",
"Baining",
""
]
] |
This paper introduces a novel theoretical simplification of the Diffusion Schr\"odinger Bridge (DSB) that facilitates its unification with Score-based Generative Models (SGMs), addressing the limitations of DSB in complex data generation and enabling faster convergence and enhanced performance. By employing SGMs as an initial solution for DSB, our approach capitalizes on the strengths of both frameworks, ensuring a more efficient training process and improving the performance of SGM. We also propose a reparameterization technique that, despite theoretical approximations, practically improves the network's fitting capabilities. Our extensive experimental evaluations confirm the effectiveness of the simplified DSB, demonstrating its significant improvements. We believe the contributions of this work pave the way for advanced generative modeling.
|
1511.04376
|
Martin Reisslein
|
Akhilesh Thyagaturu, Anu Mercian, Michael P. McGarry, Martin
Reisslein, Wolfgang Kellerer
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey
| null |
IEEE Communications Surveys & Tutorials, vol. 18, no. 4, pp.
2738-2786, 4th Qu. 2016
|
10.1109/COMST.2016.2586999
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emerging Software Defined Networking (SDN) paradigm separates the data
plane from the control plane and centralizes network control in an SDN
controller. Applications interact with controllers to implement network
services, such as network transport with Quality of Service (QoS). SDN
facilitates the virtualization of network functions so that multiple virtual
networks can operate over a given installed physical network infrastructure.
Due to the specific characteristics of optical (photonic) communication
components and the high optical transmission capacities, SDN based optical
networking poses particular challenges, but holds also great potential. In this
article, we comprehensively survey studies that examine the SDN paradigm in
optical networks; in brief, we survey the area of Software Defined Optical
Networks (SDONs). We mainly organize the SDON studies into studies focused on
the infrastructure layer, the control layer, and the application layer.
Moreover, we cover SDON studies focused on network virtualization, as well as
SDON studies focused on the orchestration of multilayer and multidomain
networking. Based on the survey, we identify open challenges for SDONs and
outline future directions.
|
[
{
"created": "Fri, 13 Nov 2015 17:31:10 GMT",
"version": "v1"
},
{
"created": "Thu, 26 May 2016 10:19:58 GMT",
"version": "v2"
},
{
"created": "Sun, 17 Jul 2016 07:46:47 GMT",
"version": "v3"
}
] |
2016-11-29
|
[
[
"Thyagaturu",
"Akhilesh",
""
],
[
"Mercian",
"Anu",
""
],
[
"McGarry",
"Michael P.",
""
],
[
"Reisslein",
"Martin",
""
],
[
"Kellerer",
"Wolfgang",
""
]
] |
The emerging Software Defined Networking (SDN) paradigm separates the data plane from the control plane and centralizes network control in an SDN controller. Applications interact with controllers to implement network services, such as network transport with Quality of Service (QoS). SDN facilitates the virtualization of network functions so that multiple virtual networks can operate over a given installed physical network infrastructure. Due to the specific characteristics of optical (photonic) communication components and the high optical transmission capacities, SDN based optical networking poses particular challenges, but holds also great potential. In this article, we comprehensively survey studies that examine the SDN paradigm in optical networks; in brief, we survey the area of Software Defined Optical Networks (SDONs). We mainly organize the SDON studies into studies focused on the infrastructure layer, the control layer, and the application layer. Moreover, we cover SDON studies focused on network virtualization, as well as SDON studies focused on the orchestration of multilayer and multidomain networking. Based on the survey, we identify open challenges for SDONs and outline future directions.
|
2306.02346
|
Shuo Ye
|
Shuo Ye and Yufeng Shi and Ruxin Wang and Yu Wang and Jiamiao Xu and
Chuanwu Yang and Xinge You
|
CDLT: A Dataset with Concept Drift and Long-Tailed Distribution for
Fine-Grained Visual Categorization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data is the foundation for the development of computer vision, and the
establishment of datasets plays an important role in advancing the techniques
of fine-grained visual categorization (FGVC). In the existing FGVC datasets
used in computer vision, it is generally assumed that each collected instance
has fixed characteristics and the distribution of different categories is
relatively balanced. In contrast, real-world scenarios reveal that
the characteristics of instances tend to vary with time and exhibit a
long-tailed distribution. Hence, the collected datasets may mislead the
optimization of the fine-grained classifiers, resulting in unpleasant
performance in real applications. Starting from the real-world conditions and
to promote the practical progress of fine-grained visual categorization, we
present a Concept Drift and Long-Tailed Distribution dataset. Specifically, the
dataset is collected by gathering 11195 images of 250 instances in different
species for 47 consecutive months in their natural contexts. The collection
process involves dozens of crowd workers for photographing and domain experts
for labelling. Extensive baseline experiments using the state-of-the-art
fine-grained classification models demonstrate the issues of concept drift and
long-tailed distribution that exist in the dataset, which require the attention
of future research.
|
[
{
"created": "Sun, 4 Jun 2023 12:42:45 GMT",
"version": "v1"
}
] |
2023-06-06
|
[
[
"Ye",
"Shuo",
""
],
[
"Shi",
"Yufeng",
""
],
[
"Wang",
"Ruxin",
""
],
[
"Wang",
"Yu",
""
],
[
"Xu",
"Jiamiao",
""
],
[
"Yang",
"Chuanwu",
""
],
[
"You",
"Xinge",
""
]
] |
Data is the foundation for the development of computer vision, and the establishment of datasets plays an important role in advancing the techniques of fine-grained visual categorization (FGVC). In the existing FGVC datasets used in computer vision, it is generally assumed that each collected instance has fixed characteristics and the distribution of different categories is relatively balanced. In contrast, real-world scenarios reveal that the characteristics of instances tend to vary with time and exhibit a long-tailed distribution. Hence, the collected datasets may mislead the optimization of the fine-grained classifiers, resulting in unpleasant performance in real applications. Starting from the real-world conditions and to promote the practical progress of fine-grained visual categorization, we present a Concept Drift and Long-Tailed Distribution dataset. Specifically, the dataset is collected by gathering 11195 images of 250 instances in different species for 47 consecutive months in their natural contexts. The collection process involves dozens of crowd workers for photographing and domain experts for labelling. Extensive baseline experiments using the state-of-the-art fine-grained classification models demonstrate the issues of concept drift and long-tailed distribution that exist in the dataset, which require the attention of future research.
|
2303.06545
|
Hao Zhou
|
Hao Zhou, Chongyang Zhang, Yanjun Chen, Chuanping Hu
|
Towards Diverse Temporal Grounding under Single Positive Labels
|
The source codes are available at
https://github.com/zhouhaocv/DTG-SPL
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal grounding aims to retrieve moments of the described event within an
untrimmed video by a language query. Typically, existing methods assume
annotations are precise and unique, yet one query may describe multiple moments
in many cases. Hence, simply taking it as a one-vs-one mapping task and
striving to match single-label annotations will inevitably introduce false
negatives during optimization. In this study, we reformulate this task as a
one-vs-many optimization problem under the condition of single positive labels.
The unlabeled moments are considered unobserved rather than negative, and we
explore mining potential positive moments to assist in multiple moment
retrieval. In this setting, we propose a novel Diverse Temporal Grounding
framework, termed DTG-SPL, which mainly consists of a positive moment
estimation (PME) module and a diverse moment regression (DMR) module. PME
leverages semantic reconstruction information and an expected positive
regularization to uncover potential positive moments in an online fashion.
Under the supervision of these pseudo positives, DMR is able to localize
diverse moments in parallel that meet different users. The entire framework
allows for end-to-end optimization as well as fast inference. Extensive
experiments on Charades-STA and ActivityNet Captions show that our method
achieves superior performance in terms of both single-label and multi-label
metrics.
|
[
{
"created": "Sun, 12 Mar 2023 02:54:18 GMT",
"version": "v1"
}
] |
2023-03-14
|
[
[
"Zhou",
"Hao",
""
],
[
"Zhang",
"Chongyang",
""
],
[
"Chen",
"Yanjun",
""
],
[
"Hu",
"Chuanping",
""
]
] |
Temporal grounding aims to retrieve moments of the described event within an untrimmed video by a language query. Typically, existing methods assume annotations are precise and unique, yet one query may describe multiple moments in many cases. Hence, simply taking it as a one-vs-one mapping task and striving to match single-label annotations will inevitably introduce false negatives during optimization. In this study, we reformulate this task as a one-vs-many optimization problem under the condition of single positive labels. The unlabeled moments are considered unobserved rather than negative, and we explore mining potential positive moments to assist in multiple moment retrieval. In this setting, we propose a novel Diverse Temporal Grounding framework, termed DTG-SPL, which mainly consists of a positive moment estimation (PME) module and a diverse moment regression (DMR) module. PME leverages semantic reconstruction information and an expected positive regularization to uncover potential positive moments in an online fashion. Under the supervision of these pseudo positives, DMR is able to localize diverse moments in parallel that meet different users. The entire framework allows for end-to-end optimization as well as fast inference. Extensive experiments on Charades-STA and ActivityNet Captions show that our method achieves superior performance in terms of both single-label and multi-label metrics.
|
2110.15919
|
Pranay Bhardwaj
|
Vinay U. Pai, Pranay Bhardwaj, and S. M. Zafaruddin
|
Performance Analysis of Dual-Hop THz Wireless Transmission for Backhaul
Applications
|
This paper has been accepted for presentation in 2021 IEEE
International Conference on Advanced Networks and Telecommunications Systems
(ANTS), Hyderabad, India
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
THz transmissions suffer from pointing errors due to antenna misalignment and
incur higher path loss from the molecular absorption in addition to the channel
fading. In this paper, we employ an amplify-and-forward (AF) dual-hop relaying
to mitigate the effect of pointing errors and extend the range of the THz
wireless system for backhaul connectivity. We provide statistical analysis on
the performance of the considered system by deriving analytical expressions for
the outage probability, average bit-error-rate (BER), average signal-to-noise
ratio (SNR), and a lower bound on the ergodic capacity over independent and
identically distributed (i.i.d.) $\alpha$-$\mu$ fading combined with the statistical effect of
pointing errors. Using computer simulations, we validate the derived analysis
of the relay-assisted system. We also demonstrate the effect of the system
parameters on outage probability and average BER with the help of diversity
order. We show that data rates up to several \mbox{Gbps} can be achieved using
THz transmissions, which is desirable for next-generation wireless systems,
especially for backhaul applications.
|
[
{
"created": "Fri, 29 Oct 2021 17:15:38 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Nov 2021 19:24:23 GMT",
"version": "v2"
}
] |
2021-11-23
|
[
[
"Pai",
"Vinay U.",
""
],
[
"Bhardwaj",
"Pranay",
""
],
[
"Zafaruddin",
"S. M.",
""
]
] |
THz transmissions suffer from pointing errors due to antenna misalignment and incur higher path loss from the molecular absorption in addition to the channel fading. In this paper, we employ an amplify-and-forward (AF) dual-hop relaying to mitigate the effect of pointing errors and extend the range of the THz wireless system for backhaul connectivity. We provide statistical analysis on the performance of the considered system by deriving analytical expressions for the outage probability, average bit-error-rate (BER), average signal-to-noise ratio (SNR), and a lower bound on the ergodic capacity over independent and identically distributed (i.i.d.) $\alpha$-$\mu$ fading combined with the statistical effect of pointing errors. Using computer simulations, we validate the derived analysis of the relay-assisted system. We also demonstrate the effect of the system parameters on outage probability and average BER with the help of diversity order. We show that data rates up to several \mbox{Gbps} can be achieved using THz transmissions, which is desirable for next-generation wireless systems, especially for backhaul applications.
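A quick Monte Carlo sanity check of the outage behavior can be sketched as follows. It uses the property that if G ~ Gamma(mu, 1), then r_hat*(G/mu)^(1/alpha) has an alpha-mu envelope distribution, and approximates the end-to-end AF SNR by the minimum of the two hop SNRs; pointing errors and molecular absorption are omitted, and the SNR scale is a nominal assumption rather than a calibrated average:

import numpy as np

rng = np.random.default_rng(0)
alpha, mu, r_hat = 2.0, 1.5, 1.0
n = 1_000_000

def sample_snr(nominal_snr_db):
    # Sample alpha-mu envelopes via the gamma representation and scale
    # the squared envelope by a nominal SNR level.
    g = rng.gamma(shape=mu, scale=1.0, size=n)
    r = r_hat * (g / mu) ** (1.0 / alpha)
    return 10 ** (nominal_snr_db / 10) * r ** 2

# Dual-hop AF link: outage when either hop falls below the threshold
# (a standard high-SNR approximation of the end-to-end SNR).
thr = 10 ** (5 / 10)                      # 5 dB threshold
snr1, snr2 = sample_snr(15), sample_snr(15)
print(np.mean(np.minimum(snr1, snr2) < thr))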
|
2202.02071
|
Henrique Moniz
|
Afonso Oliveira, Henrique Moniz, Rodrigo Rodrigues
|
Alea-BFT: Practical Asynchronous Byzantine Fault Tolerance
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional Byzantine Fault Tolerance (BFT) state machine replication
protocols assume a partial synchrony model, leading to a design where a leader
replica drives the protocol and is replaced after a timeout. Recently, we
witnessed a surge of asynchronous BFT protocols that use randomization to
remove the assumptions of bounds on message delivery times, making them more
resilient to adverse network conditions. However, these protocols still fall
short of being practical across a broad range of scenarios due to their cubic
communication costs, use of expensive primitives, and overall protocol
complexity. In this paper, we present Alea-BFT, the first asynchronous BFT
protocol to achieve quadratic communication complexity, allowing it to scale to
large networks. Alea-BFT brings the key design insight from classical protocols
of concentrating part of the work on a single designated replica, and
incorporates this principle in a two-stage pipelined design, with an efficient
broadcast led by the designated replica followed by an inexpensive binary
agreement. We evaluated our prototype implementation across 10 sites in 4
continents, and our results show significant scalability gains from the
proposed design.
|
[
{
"created": "Fri, 4 Feb 2022 10:53:37 GMT",
"version": "v1"
}
] |
2022-02-07
|
[
[
"Oliveira",
"Afonso",
""
],
[
"Moniz",
"Henrique",
""
],
[
"Rodrigues",
"Rodrigo",
""
]
] |
Traditional Byzantine Fault Tolerance (BFT) state machine replication protocols assume a partial synchrony model, leading to a design where a leader replica drives the protocol and is replaced after a timeout. Recently, we witnessed a surge of asynchronous BFT protocols that use randomization to remove the assumptions of bounds on message delivery times, making them more resilient to adverse network conditions. However, these protocols still fall short of being practical across a broad range of scenarios due to their cubic communication costs, use of expensive primitives, and overall protocol complexity. In this paper, we present Alea-BFT, the first asynchronous BFT protocol to achieve quadratic communication complexity, allowing it to scale to large networks. Alea-BFT brings the key design insight from classical protocols of concentrating part of the work on a single designated replica, and incorporates this principle in a two-stage pipelined design, with an efficient broadcast led by the designated replica followed by an inexpensive binary agreement. We evaluated our prototype implementation across 10 sites in 4 continents, and our results show significant scalability gains from the proposed design.
|
1405.2066
|
Jehad Al Dallal Prof.
|
Jehad Al Dallal
|
How and When to Flatten Java Classes?
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Improving modularity and reusability are two key objectives in
object-oriented programming. These objectives are achieved by applying several
key concepts, such as data encapsulation and inheritance. A class in an
object-oriented system is the basic unit of design. Assessing the quality of an
object-oriented class may require flattening the class and representing it as
it really is, including all accessible inherited class members. Thus, class
flattening helps in exploring the impact of inheritance on improving code
quality. This paper explains how to flatten Java classes and discusses the
relationship between class flattening and some applications of interest to
software practitioners, such as refactoring and indicating external quality
attributes.
|
[
{
"created": "Thu, 8 May 2014 19:48:33 GMT",
"version": "v1"
}
] |
2014-05-09
|
[
[
"Dallal",
"Jehad Al",
""
]
] |
Improving modularity and reusability are two key objectives in object-oriented programming. These objectives are achieved by applying several key concepts, such as data encapsulation and inheritance. A class in an object-oriented system is the basic unit of design. Assessing the quality of an object-oriented class may require flattening the class and representing it as it really is, including all accessible inherited class members. Thus, class flattening helps in exploring the impact of inheritance on improving code quality. This paper explains how to flatten Java classes and discusses the relationship between class flattening and some applications of interest to software practitioners, such as refactoring and indicating external quality attributes.
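A hedged, Python-based analogy of the class-flattening idea above (the paper itself targets Java; Python's method resolution order plays the role of the inheritance chain here, and the class hierarchy is invented for illustration):

```python
import inspect

class Base:
    def area(self):
        return 0

class Shape(Base):
    def name(self):
        return "shape"

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

def flatten(cls):
    # Collect every method the class "really" has, including inherited
    # ones, and record which ancestor defines the winning implementation.
    flat = {}
    for klass in reversed(cls.__mro__):          # object ... cls
        for name, member in vars(klass).items():
            if inspect.isfunction(member):
                flat[name] = (klass.__name__, member)
    return flat

for name, (owner, _) in sorted(flatten(Circle).items()):
    print(f"{name:10s} defined in {owner}")
```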
|
1406.3583
|
Aaron D. Jaggard
|
Aaron D. Jaggard, Aaron Johnson, Paul Syverson, and Joan Feigenbaum
|
Representing Network Trust and Using It to Improve Anonymous
Communication
|
24 pages; talk to be presented at HotPETs 2014
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the effectiveness of correlation attacks against Tor, the
censorship arms race, and observations of malicious relays in Tor, we propose
that Tor users capture their trust in network elements using probability
distributions over the sets of elements observed by network adversaries. We
present a modular system that allows users to efficiently and conveniently
create such distributions and use them to improve their security. The major
components of this system are (i) an ontology of network-element types that
represents the main threats to and vulnerabilities of anonymous communication
over Tor, (ii) a formal language that allows users to naturally express trust
beliefs about network elements, and (iii) a conversion procedure that takes the
ontology, public information about the network, and user beliefs written in the
trust language and produces a Bayesian Belief Network that represents the
probability distribution in a way that is concise and easily sampleable. We
also present preliminary experimental results that show the distribution
produced by our system can improve security when employed by users; further
improvement is seen when the system is employed by both users and services.
|
[
{
"created": "Fri, 13 Jun 2014 16:23:12 GMT",
"version": "v1"
}
] |
2014-06-16
|
[
[
"Jaggard",
"Aaron D.",
""
],
[
"Johnson",
"Aaron",
""
],
[
"Syverson",
"Paul",
""
],
[
"Feigenbaum",
"Joan",
""
]
] |
Motivated by the effectiveness of correlation attacks against Tor, the censorship arms race, and observations of malicious relays in Tor, we propose that Tor users capture their trust in network elements using probability distributions over the sets of elements observed by network adversaries. We present a modular system that allows users to efficiently and conveniently create such distributions and use them to improve their security. The major components of this system are (i) an ontology of network-element types that represents the main threats to and vulnerabilities of anonymous communication over Tor, (ii) a formal language that allows users to naturally express trust beliefs about network elements, and (iii) a conversion procedure that takes the ontology, public information about the network, and user beliefs written in the trust language and produces a Bayesian Belief Network that represents the probability distribution in a way that is concise and easily sampleable. We also present preliminary experimental results that show the distribution produced by our system can improve security when employed by users; further improvement is seen when the system is employed by both users and services.
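A toy sketch of the core idea above: user beliefs induce a probability distribution over the sets of network elements an adversary observes, which can then be sampled. The relay families and probabilities below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical relay families and a user's belief (probability) that a
# given adversary observes each family; the numbers are illustrative.
families = ["familyA", "familyB", "familyC", "familyD"]
p_observed = np.array([0.05, 0.30, 0.10, 0.60])

def sample_observed_sets(n):
    # Draw n samples from the distribution over *sets* of observed
    # elements induced by independent per-family beliefs.
    draws = rng.random((n, len(families))) < p_observed
    return [frozenset(f for f, hit in zip(families, row) if hit)
            for row in draws]

# Estimate the probability that both ends of a circuit are observed.
samples = sample_observed_sets(100_000)
both = sum(1 for s in samples if {"familyA", "familyD"} <= s)
print(both / len(samples))   # roughly 0.05 * 0.60 = 0.03
```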
|
1702.06235
|
Will Radford
|
Andrew Chisholm, Will Radford, Ben Hachey
|
Learning to generate one-sentence biographies from Wikidata
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate the generation of one-sentence Wikipedia biographies from
facts derived from Wikidata slot-value pairs. We train a recurrent neural
network sequence-to-sequence model with attention to select facts and generate
textual summaries. Our model incorporates a novel secondary objective that
helps ensure it generates sentences that contain the input facts. The model
achieves a BLEU score of 41, improving significantly upon the vanilla
sequence-to-sequence model and scoring roughly twice that of a simple template
baseline. Human preference evaluation suggests the model is nearly as good as
the Wikipedia reference. Manual analysis explores content selection, suggesting
the model can trade the ability to infer knowledge against the risk of
hallucinating incorrect information.
|
[
{
"created": "Tue, 21 Feb 2017 01:30:59 GMT",
"version": "v1"
}
] |
2017-02-22
|
[
[
"Chisholm",
"Andrew",
""
],
[
"Radford",
"Will",
""
],
[
"Hachey",
"Ben",
""
]
] |
We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.
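The "simple template baseline" mentioned above can be made concrete in a few lines; the slot names are simplified stand-ins for real Wikidata property identifiers:

```python
TEMPLATE = "{name} ({birth_year}) was a {nationality} {occupation}."

def template_biography(facts):
    # facts: Wikidata-style slot-value pairs; slot names here are
    # simplified stand-ins for the real property identifiers.
    return TEMPLATE.format(**facts)

print(template_biography({
    "name": "Ada Lovelace",
    "birth_year": 1815,
    "nationality": "English",
    "occupation": "mathematician",
}))
```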
|
2402.11173
|
Andrew Lowy
|
Andrew Lowy, Jonathan Ullman, Stephen J. Wright
|
How to Make the Gradients Small Privately: Improved Rates for
Differentially Private Non-Convex Optimization
| null | null | null | null |
cs.LG cs.CR math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
We provide a simple and flexible framework for designing differentially
private algorithms to find approximate stationary points of non-convex loss
functions. Our framework is based on using a private approximate risk minimizer
to "warm start" another private algorithm for finding stationary points. We use
this framework to obtain improved, and sometimes optimal, rates for several
classes of non-convex loss functions. First, we obtain improved rates for
finding stationary points of smooth non-convex empirical loss functions.
Second, we specialize to quasar-convex functions, which generalize star-convex
functions and arise in learning dynamical systems and training some neural
nets. We achieve the optimal rate for this class. Third, we give an optimal
algorithm for finding stationary points of functions satisfying the
Kurdyka-Lojasiewicz (KL) condition. For example, over-parameterized neural
networks often satisfy this condition. Fourth, we provide new state-of-the-art
rates for stationary points of non-convex population loss functions. Fifth, we
obtain improved rates for non-convex generalized linear models. A modification
of our algorithm achieves nearly the same rates for second-order stationary
points of functions with Lipschitz Hessian, improving over the previous
state-of-the-art for each of the above problems.
|
[
{
"created": "Sat, 17 Feb 2024 02:42:56 GMT",
"version": "v1"
}
] |
2024-02-20
|
[
[
"Lowy",
"Andrew",
""
],
[
"Ullman",
"Jonathan",
""
],
[
"Wright",
"Stephen J.",
""
]
] |
We provide a simple and flexible framework for designing differentially private algorithms to find approximate stationary points of non-convex loss functions. Our framework is based on using a private approximate risk minimizer to "warm start" another private algorithm for finding stationary points. We use this framework to obtain improved, and sometimes optimal, rates for several classes of non-convex loss functions. First, we obtain improved rates for finding stationary points of smooth non-convex empirical loss functions. Second, we specialize to quasar-convex functions, which generalize star-convex functions and arise in learning dynamical systems and training some neural nets. We achieve the optimal rate for this class. Third, we give an optimal algorithm for finding stationary points of functions satisfying the Kurdyka-Lojasiewicz (KL) condition. For example, over-parameterized neural networks often satisfy this condition. Fourth, we provide new state-of-the-art rates for stationary points of non-convex population loss functions. Fifth, we obtain improved rates for non-convex generalized linear models. A modification of our algorithm achieves nearly the same rates for second-order stationary points of functions with Lipschitz Hessian, improving over the previous state-of-the-art for each of the above problems.
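A minimal sketch of the warm-start framework described above, assuming clipped noisy gradient descent as the private subroutine; the noise scale is a placeholder and is not calibrated to any formal privacy budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gd(grad_fn, w, steps, lr, clip, sigma):
    # Gradient descent with per-step clipping and Gaussian noise; sigma
    # would normally be calibrated to a target (epsilon, delta) budget.
    for _ in range(steps):
        g = grad_fn(w)
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        w = w - lr * (g + rng.normal(0.0, sigma * clip, size=w.shape))
    return w

# Toy smooth objective: f(w) = 0.5 * ||w||^2 + cos(w[0]), so the
# gradient is w with w[0] - sin(w[0]) in the first coordinate.
grad = lambda w: w + np.array([-np.sin(w[0])] + [0.0] * (len(w) - 1))

w0 = np.full(5, 3.0)
w_warm = noisy_gd(grad, w0, steps=200, lr=0.1, clip=1.0, sigma=0.05)      # phase 1: private ERM
w_out = noisy_gd(grad, w_warm, steps=200, lr=0.01, clip=1.0, sigma=0.05)  # phase 2: small gradients
print(np.linalg.norm(grad(w_out)))  # gradient norm after the warm-started run
```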
|
2401.11946
|
Jiajun Liu
|
Jiajun Liu, Lina Tan, Zhili Zhou, Yi Li, Peng Chen
|
A Dynamic YOLO-Based Sequence-Matching Model for Efficient Coverless
Image Steganography
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Many existing coverless steganography methods establish a mapping
relationship between cover images and hidden data. A known issue is that
the number of images stored in the database grows exponentially as the
steganographic capacity rises. The need for a high steganographic capacity
makes it challenging to build an image database. To improve the image library
utilization and anti-attack capability of the steganography system, we present
an efficient coverless scheme based on dynamically matched substrings. YOLO is
employed for selecting optimal objects, and a mapping dictionary is established
between these objects and scrambling factors. With the aid of this dictionary,
each image is effectively assigned to a specific scrambling factor, which is
used to scramble the receiver's sequence key. To achieve sufficient
steganography capability based on a limited image library, all substrings of
the scrambled sequences hold the potential to hide data. After completing the
secret information matching, the ideal number of stego images will be obtained
from the database. According to experimental results, this technology
outperforms most previous works on data load, transmission security, and hiding
capacity. Under typical geometric attacks, it can recover 79.85\% of secret
information on average. Furthermore, only approximately 200 random images are
needed to meet a capacity of 19 bits per image.
|
[
{
"created": "Mon, 22 Jan 2024 13:35:27 GMT",
"version": "v1"
}
] |
2024-01-23
|
[
[
"Liu",
"Jiajun",
""
],
[
"Tan",
"Lina",
""
],
[
"Zhou",
"Zhili",
""
],
[
"Li",
"Yi",
""
],
[
"Chen",
"Peng",
""
]
] |
Many existing coverless steganography methods establish a mapping relationship between cover images and hidden data. A known issue is that the number of images stored in the database grows exponentially as the steganographic capacity rises. The need for a high steganographic capacity makes it challenging to build an image database. To improve the image library utilization and anti-attack capability of the steganography system, we present an efficient coverless scheme based on dynamically matched substrings. YOLO is employed for selecting optimal objects, and a mapping dictionary is established between these objects and scrambling factors. With the aid of this dictionary, each image is effectively assigned to a specific scrambling factor, which is used to scramble the receiver's sequence key. To achieve sufficient steganography capability based on a limited image library, all substrings of the scrambled sequences hold the potential to hide data. After completing the secret information matching, the ideal number of stego images will be obtained from the database. According to experimental results, this technology outperforms most previous works on data load, transmission security, and hiding capacity. Under typical geometric attacks, it can recover 79.85\% of secret information on average. Furthermore, only approximately 200 random images are needed to meet a capacity of 19 bits per image.
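A toy sketch of the matching pipeline described above: a hypothetical object-to-scrambling-factor dictionary, a deterministic scramble of the receiver's sequence key, and a substring search for the secret bits. All names and values are illustrative.

```python
import random

# Hypothetical mapping from detected object labels to scrambling factors.
OBJECT_TO_FACTOR = {"car": 3, "dog": 5, "person": 7}

def scramble(key_bits, factor):
    # Deterministically permute the receiver's sequence key.
    rnd = random.Random(factor)
    idx = list(range(len(key_bits)))
    rnd.shuffle(idx)
    return "".join(key_bits[i] for i in idx)

def find_cover(secret, key_bits, detected_objects):
    # Pick the image (object) whose scrambled key contains the secret
    # bit string as a substring; every substring can carry payload.
    for obj in detected_objects:
        seq = scramble(key_bits, OBJECT_TO_FACTOR[obj])
        pos = seq.find(secret)
        if pos >= 0:
            return obj, pos
    return None

print(find_cover("101", "1100101001011010", ["car", "dog", "person"]))
```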
|
2203.10837
|
Marcos Faundez-Zanuy
|
K. L\'opez-de-Ipi\~na, Marcos Faundez-Zanuy, Jordi Sol\'e-Casals,
Fernando Zelarin, Pilar Calvo
|
Multi-class versus One-class classifier in spontaneous speech analysis
oriented to Alzheimer Disease diagnosis
|
10 pages, published in International Conference on NONLINEAR SPEECH
PROCESSING, NOLISP 2015 jointly organized with the 25th Italian Workshop on
Neural Networks, WIRN 2015, held in May 2015, Vietri sul Mare, Salerno, Italy
|
Recent Advances in Nonlinear Speech Processing. Smart Innovation,
Systems and Technologies, vol 48. Springer, Cham 2015
|
10.1007/978-3-319-28109-4_7
| null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Most medical developments require the ability to identify samples that are
anomalous with respect to a target group or control group, in the sense that
they could belong to a new, previously unseen class or may not be class data.
When there are not enough data to train a two-class classifier, one-class
classification appears to be an available solution. On the other hand,
non-linear approaches can provide very useful information. The aim of our
project is to contribute to earlier diagnosis of AD and better estimates of its
severity by using automatic analysis performed through new biomarkers extracted
from the speech signal. The methods selected in this case are speech biomarkers
oriented to Spontaneous Speech and Emotional Response Analysis. In this
approach, one-class and two-class classifiers are analyzed. The use of
information about outliers and Fractal Dimension features improves the system
performance.
|
[
{
"created": "Mon, 21 Mar 2022 09:57:20 GMT",
"version": "v1"
}
] |
2022-03-22
|
[
[
"López-de-Ipiña",
"K.",
""
],
[
"Faundez-Zanuy",
"Marcos",
""
],
[
"Solé-Casals",
"Jordi",
""
],
[
"Zelarin",
"Fernando",
""
],
[
"Calvo",
"Pilar",
""
]
] |
Most medical developments require the ability to identify samples that are anomalous with respect to a target group or control group, in the sense that they could belong to a new, previously unseen class or may not be class data. When there are not enough data to train a two-class classifier, one-class classification appears to be an available solution. On the other hand, non-linear approaches can provide very useful information. The aim of our project is to contribute to earlier diagnosis of AD and better estimates of its severity by using automatic analysis performed through new biomarkers extracted from the speech signal. The methods selected in this case are speech biomarkers oriented to Spontaneous Speech and Emotional Response Analysis. In this approach, one-class and two-class classifiers are analyzed. The use of information about outliers and Fractal Dimension features improves the system performance.
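A minimal sketch of the one-class versus two-class comparison above, using scikit-learn's OneClassSVM and SVC on synthetic data (the feature values are invented; the paper uses speech biomarkers):

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(200, 4))   # control group
patients = rng.normal(2.5, 1.0, size=(20, 4))   # scarce AD-like data

# One-class: train on controls only, flag anomalies (+1 inlier / -1 outlier).
occ = OneClassSVM(kernel="rbf", nu=0.1).fit(healthy)
print("one-class flags:", (occ.predict(patients) == -1).mean())

# Two-class: needs labeled data from both groups.
X = np.vstack([healthy, patients])
y = np.array([0] * len(healthy) + [1] * len(patients))
clf = SVC(kernel="rbf").fit(X, y)
print("two-class acc:", clf.score(X, y))
```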
|
2408.06047
|
Xuanpu Zhang
|
Xuanpu Zhang and Dan Song and Pengxin Zhan and Qingguo Chen and Zhao
Xu and Weihua Luo and Kaifu Zhang and Anan Liu
|
BooW-VTON: Boosting In-the-Wild Virtual Try-On via Mask-Free Pseudo Data
Training
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-based virtual try-on is an increasingly popular and important task to
generate realistic try-on images of a specific person. Existing methods always
employ an accurate mask to remove the original garment in the source image,
thus achieving realistic synthesized images in simple and conventional try-on
scenarios based on powerful diffusion models. Therefore, acquiring a suitable
mask is vital to the try-on performance of these methods. However, obtaining precise
inpainting masks, especially for complex wild try-on data containing diverse
foreground occlusions and person poses, is not easy as Figure 1-Top shows. This
difficulty often results in poor performance in more practical and challenging
real-life scenarios, such as the selfie scene shown in Figure 1-Bottom. To this
end, we propose a novel training paradigm combined with an efficient data
augmentation method to acquire large-scale unpaired training data from wild
scenarios, thereby significantly facilitating the try-on performance of our
model without the need for additional inpainting masks. Besides, a try-on
localization loss is designed to localize a more accurate try-on area to obtain
more reasonable try-on results. It is noted that our method only needs the
reference cloth image, source pose image and source person image as input,
which is more cost-effective and user-friendly compared to existing methods.
Extensive qualitative and quantitative experiments have demonstrated superior
performance in wild scenarios with such a low-demand input.
|
[
{
"created": "Mon, 12 Aug 2024 10:39:59 GMT",
"version": "v1"
}
] |
2024-08-13
|
[
[
"Zhang",
"Xuanpu",
""
],
[
"Song",
"Dan",
""
],
[
"Zhan",
"Pengxin",
""
],
[
"Chen",
"Qingguo",
""
],
[
"Xu",
"Zhao",
""
],
[
"Luo",
"Weihua",
""
],
[
"Zhang",
"Kaifu",
""
],
[
"Liu",
"Anan",
""
]
] |
Image-based virtual try-on is an increasingly popular and important task to generate realistic try-on images of a specific person. Existing methods always employ an accurate mask to remove the original garment in the source image, thus achieving realistic synthesized images in simple and conventional try-on scenarios based on powerful diffusion models. Therefore, acquiring a suitable mask is vital to the try-on performance of these methods. However, obtaining precise inpainting masks, especially for complex wild try-on data containing diverse foreground occlusions and person poses, is not easy as Figure 1-Top shows. This difficulty often results in poor performance in more practical and challenging real-life scenarios, such as the selfie scene shown in Figure 1-Bottom. To this end, we propose a novel training paradigm combined with an efficient data augmentation method to acquire large-scale unpaired training data from wild scenarios, thereby significantly facilitating the try-on performance of our model without the need for additional inpainting masks. Besides, a try-on localization loss is designed to localize a more accurate try-on area to obtain more reasonable try-on results. It is noted that our method only needs the reference cloth image, source pose image and source person image as input, which is more cost-effective and user-friendly compared to existing methods. Extensive qualitative and quantitative experiments have demonstrated superior performance in wild scenarios with such a low-demand input.
|
1604.01186
|
EPTCS
|
Denis Firsov, Tarmo Uustalu, Niccol\`o Veltri
|
Variations on Noetherianness
|
In Proceedings MSFP 2016, arXiv:1604.00384
|
EPTCS 207, 2016, pp. 76-88
|
10.4204/EPTCS.207.4
| null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In constructive mathematics, several nonequivalent notions of finiteness
exist. In this paper, we continue the study of Noetherian sets in the
dependently typed setting of the Agda programming language. We want to say that
a set is Noetherian, if, when we are shown elements from it one after another,
we will sooner or later have seen some element twice. This idea can be made
precise in a number of ways. We explore the properties and connections of some
of the possible encodings. In particular, we show that certain implementations
imply decidable equality while others do not, and we construct counterexamples
in the latter case. Additionally, we explore the relation between
Noetherianness and other notions of finiteness.
|
[
{
"created": "Tue, 5 Apr 2016 09:04:13 GMT",
"version": "v1"
}
] |
2016-04-06
|
[
[
"Firsov",
"Denis",
""
],
[
"Uustalu",
"Tarmo",
""
],
[
"Veltri",
"Niccolò",
""
]
] |
In constructive mathematics, several nonequivalent notions of finiteness exist. In this paper, we continue the study of Noetherian sets in the dependently typed setting of the Agda programming language. We want to say that a set is Noetherian, if, when we are shown elements from it one after another, we will sooner or later have seen some element twice. This idea can be made precise in a number of ways. We explore the properties and connections of some of the possible encodings. In particular, we show that certain implementations imply decidable equality while others do not, and we construct counterexamples in the latter case. Additionally, we explore the relation between Noetherianness and other notions of finiteness.
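The operational reading above ("sooner or later we see some element twice") can be illustrated with a small streaming check; this is only an informal, classical illustration and sidesteps the constructive subtleties the paper studies:

```python
def sees_a_repeat(stream):
    # Operational reading of "Noetherian": as elements arrive one after
    # another, we must sooner or later see some element twice.
    # (Relies on decidable equality, one of the points the paper probes.)
    seen = set()
    for x in stream:
        if x in seen:
            return True
        seen.add(x)
    return False  # only reachable for finite streams

print(sees_a_repeat(iter([2, 0, 1, 0])))  # True
```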
|
2109.06612
|
Johanna Schmidt
|
Raphael Sahann, Torsten M\"oller, Johanna Schmidt
|
Histogram binning revisited with a focus on human perception
|
Accepted as short paper at VIS 2021. Supplemental material can be
found at https://github.com/johanna-schmidt/histogram-binning-revisited
|
Proceedings of VIS short papers 2021
| null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents a quantitative user study to evaluate how well users can
visually perceive the underlying data distribution from a histogram
representation. We used different sample and bin sizes and four different
distributions (uniform, normal, bimodal, and gamma). The study results confirm
that, in general, more bins correlate with fewer errors by the viewers.
However, beyond a certain number of bins, the error rate cannot be improved by
adding more bins. By comparing our study results with the outcomes of existing
mathematical models for histogram binning (e.g., Sturges' formula, Scott's
normal reference rule, the Rice Rule, or Freedman-Diaconis' choice), we can see
that most of them overestimate the number of bins necessary to make the
distribution visible to a human viewer.
|
[
{
"created": "Tue, 14 Sep 2021 12:08:27 GMT",
"version": "v1"
}
] |
2021-09-15
|
[
[
"Sahann",
"Raphael",
""
],
[
"Möller",
"Torsten",
""
],
[
"Schmidt",
"Johanna",
""
]
] |
This paper presents a quantitative user study to evaluate how well users can visually perceive the underlying data distribution from a histogram representation. We used different sample and bin sizes and four different distributions (uniform, normal, bimodal, and gamma). The study results confirm that, in general, more bins correlate with fewer errors by the viewers. However, beyond a certain number of bins, the error rate cannot be improved by adding more bins. By comparing our study results with the outcomes of existing mathematical models for histogram binning (e.g., Sturges' formula, Scott's normal reference rule, the Rice Rule, or Freedman-Diaconis' choice), we can see that most of them overestimate the number of bins necessary to make the distribution visible to a human viewer.
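The binning rules compared above are available directly in NumPy; a quick sketch of how many bins each rule proposes for a sample (the distribution and sample size here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=500)  # try uniform/bimodal/gamma as in the study

for rule in ["sturges", "scott", "rice", "fd"]:
    edges = np.histogram_bin_edges(data, bins=rule)
    print(f"{rule:8s} -> {len(edges) - 1:3d} bins")
```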
|
1802.09657
|
Dipankar Maity
|
Dipankar Maity, John S. Baras
|
Event-Triggered Controller Synthesis for Dynamical Systems with Temporal
Logic Constraints
| null | null | null | null |
cs.RO math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose an event-triggered control framework for dynamical
systems with temporal logical constraints. Event-triggered control
methodologies have proven to be very efficient in reducing sensing,
communication and computation costs. When a continuous feedback control is
replaced with an event-triggered strategy, the corresponding state trajectories
also differ. In a system with logical constraints, such a small deviation in the
trajectory might lead to unsatisfiability of the logical constraints. In this
work, we develop an approach where we ensure that the event-triggered state
trajectory is confined within a tube of the ideal trajectory associated with
the continuous state feedback. At the same time, we will ensure satisfiability
of the logical constraints as well. Furthermore, we show that the proposed
method works for delayed systems as long as the delay is bounded by a certain
quantity.
|
[
{
"created": "Tue, 27 Feb 2018 00:17:41 GMT",
"version": "v1"
}
] |
2018-02-28
|
[
[
"Maity",
"Dipankar",
""
],
[
"Baras",
"John S.",
""
]
] |
In this work, we propose an event-triggered control framework for dynamical systems with temporal logical constraints. Event-triggered control methodologies have proven to be very efficient in reducing sensing, communication and computation costs. When a continuous feedback control is replaced with an event-triggered strategy, the corresponding state trajectories also differ. In a system with logical constraints, such a small deviation in the trajectory might lead to unsatisfiability of the logical constraints. In this work, we develop an approach where we ensure that the event-triggered state trajectory is confined within a tube of the ideal trajectory associated with the continuous state feedback. At the same time, we will ensure satisfiability of the logical constraints as well. Furthermore, we show that the proposed method works for delayed systems as long as the delay is bounded by a certain quantity.
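A minimal sketch of event-triggered sample-and-hold control on a scalar plant, assuming a simple threshold trigger; the plant, gain, and threshold values are illustrative and unrelated to the paper's temporal-logic machinery.

```python
import numpy as np

# Scalar plant x' = a x + b u with stabilizing gain k; the control is
# only updated when the state drifts a threshold away from the value
# sampled at the last event.
a, b, k, eps, dt = 1.0, 1.0, 3.0, 0.05, 1e-3

x, x_event, events = 1.0, 1.0, 0
for _ in range(10_000):
    if abs(x - x_event) > eps:      # event-triggering condition
        x_event = x                 # sample-and-hold update
        events += 1
    u = -k * x_event                # control uses last sampled state
    x += dt * (a * x + b * u)       # Euler step

print(f"final |x| = {abs(x):.4f}, control updates = {events}")
```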
|
1911.03059
|
Chowdhury Rahman
|
Afra Anika, Md. Hasibur Rahman, Salekul Islam, Abu Shafin Mohammad
Mahdee Jameel and Chowdhury Rafeed Rahman
|
A Comprehensive Comparison of Machine Learning Based Methods Used in
Bengali Question Classification
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A QA classification system maps questions asked by humans to an appropriate
answer category. A sound question classification (QC) system model is the
pre-requisite of a sound QA system. This work demonstrates phases of assembling
a QA type classification model. We present a comprehensive comparison
(performance and computational complexity) among some machine learning based
approaches used in QC for the Bengali language.
|
[
{
"created": "Fri, 8 Nov 2019 05:30:33 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Nov 2019 16:37:41 GMT",
"version": "v2"
}
] |
2019-11-20
|
[
[
"Anika",
"Afra",
""
],
[
"Rahman",
"Md. Hasibur",
""
],
[
"Islam",
"Salekul",
""
],
[
"Jameel",
"Abu Shafin Mohammad Mahdee",
""
],
[
"Rahman",
"Chowdhury Rafeed",
""
]
] |
A QA classification system maps questions asked by humans to an appropriate answer category. A sound question classification (QC) system model is the pre-requisite of a sound QA system. This work demonstrates phases of assembling a QA type classification model. We present a comprehensive comparison (performance and computational complexity) among some machine learning based approaches used in QC for the Bengali language.
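A hedged sketch of the kind of comparison described above, using a scikit-learn pipeline over a tiny stand-in corpus (real experiments would use Bengali questions and the paper's own feature sets and classifiers):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny stand-in corpus; a real study would use Bengali questions with
# labels such as PERSON / LOCATION / NUMERIC.
questions = ["who wrote this book", "where is the river", "how many districts",
             "who is the poet", "where was he born", "how many people"]
labels = ["PERSON", "LOCATION", "NUMERIC"] * 2

for clf in [LogisticRegression(max_iter=1000), MultinomialNB(), LinearSVC()]:
    model = make_pipeline(TfidfVectorizer(analyzer="char_wb",
                                          ngram_range=(2, 4)), clf)
    model.fit(questions, labels)
    print(type(clf).__name__, model.predict(["who directed the film"]))
```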
|
2212.09448
|
Senem Tanberk PhD
|
Senem Tanberk, Mustafa Can
|
Smart Journey in Istanbul: A Mobile Application in Smart Cities for
Traffic Estimation by Harnessing Time Series
| null | null |
10.1109/ASYU58738.2023.10296669
| null |
cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In recent decades, mobile applications (apps) have gained enormous
popularity, and smart services for smart cities increasingly gain attention.
The main goal of the proposed research is to present a new AI-powered mobile
application for forecasting Istanbul's traffic congestion using traffic density
data. It addresses the research question with time-series approaches (LSTM,
Transformer, and XGBoost) applied to historical traffic-load data combined with
meteorological conditions. The predictive models are evaluated with performance
indicators such as MAPE, MAE, and RMSE; the Transformer model made the most
accurate traffic predictions. The developed traffic forecasting prototype is
expected to be a starting point for future products: a mobile application
suitable for citizens' daily use.
|
[
{
"created": "Tue, 13 Dec 2022 12:10:52 GMT",
"version": "v1"
}
] |
2023-11-09
|
[
[
"Tanberk",
"Senem",
""
],
[
"Can",
"Mustafa",
""
]
] |
In recent decades, mobile applications (apps) have gained enormous popularity, and smart services for smart cities increasingly gain attention. The main goal of the proposed research is to present a new AI-powered mobile application for forecasting Istanbul's traffic congestion using traffic density data. It addresses the research question with time-series approaches (LSTM, Transformer, and XGBoost) applied to historical traffic-load data combined with meteorological conditions. The predictive models are evaluated with performance indicators such as MAPE, MAE, and RMSE; the Transformer model made the most accurate traffic predictions. The developed traffic forecasting prototype is expected to be a starting point for future products: a mobile application suitable for citizens' daily use.
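The three performance indicators named above have standard definitions; a small sketch with invented traffic numbers:

```python
import numpy as np

def mape(y, yhat):
    return np.mean(np.abs((y - yhat) / y)) * 100.0

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

y_true = np.array([120.0, 150.0, 90.0, 200.0])   # observed traffic density
y_pred = np.array([110.0, 160.0, 95.0, 190.0])   # a model's forecast

print(f"MAPE={mape(y_true, y_pred):.2f}%  "
      f"MAE={mae(y_true, y_pred):.2f}  RMSE={rmse(y_true, y_pred):.2f}")
```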
|
2401.11150
|
Junxiao Shen Mr
|
Junxiao Shen, Xuhai Xu, Ran Tan, Amy Karlson, Evan Strasnick
|
Simultaneous Gesture Classification and Localization with an Automatic
Gesture Annotation Model
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training a real-time gesture recognition model heavily relies on annotated
data. However, manual data annotation is costly and demands substantial human
effort. In order to address this challenge, we propose a novel annotation model
that can automatically annotate gesture classes and identify their temporal
ranges. Our ablation study demonstrates that our annotation model design
surpasses the baseline in terms of both gesture classification accuracy (3-4\%
improvement) and localization accuracy (71-75\% improvement). We believe that
this annotation model has immense potential to improve the training of
downstream gesture recognition models using unlabeled datasets.
|
[
{
"created": "Sat, 20 Jan 2024 07:11:03 GMT",
"version": "v1"
}
] |
2024-01-23
|
[
[
"Shen",
"Junxiao",
""
],
[
"Xu",
"Xuhai",
""
],
[
"Tan",
"Ran",
""
],
[
"Karlson",
"Amy",
""
],
[
"Strasnick",
"Evan",
""
]
] |
Training a real-time gesture recognition model heavily relies on annotated data. However, manual data annotation is costly and demands substantial human effort. In order to address this challenge, we propose a novel annotation model that can automatically annotate gesture classes and identify their temporal ranges. Our ablation study demonstrates that our annotation model design surpasses the baseline in terms of both gesture classification accuracy (3-4\% improvement) and localization accuracy (71-75\% improvement). We believe that this annotation model has immense potential to improve the training of downstream gesture recognition models using unlabeled datasets.
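Localization accuracy for temporal ranges is commonly scored with temporal IoU; a minimal sketch (the paper's exact matching criterion may differ):

```python
def temporal_iou(pred, gold):
    # pred/gold are (start, end) gesture ranges in frames or seconds.
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union > 0 else 0.0

# A predicted annotation is usually counted as correct when both the
# class matches and the temporal IoU clears a threshold (e.g. 0.5).
print(temporal_iou((1.0, 2.0), (1.2, 2.4)))  # 0.5714...
```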
|
1005.0907
|
Rdv Ijcsis
|
Yasser M. Alginaih, Abdul Ahad Siddiqi
|
Multistage Hybrid Arabic/Indian Numeral OCR System
|
IEEE Publication format, International Journal of Computer Science
and Information Security, IJCSIS, Vol. 8 No. 1, April 2010, USA. ISSN 1947
5500, http://sites.google.com/site/ijcsis/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
The use of OCR in postal services is not yet universal and there are still
many countries that process mail sorting manually. Automated Arabic/Indian
numeral Optical Character Recognition (OCR) systems for Postal services are
being used in some countries, but still there are errors during the mail
sorting process, thus causing a reduction in efficiency. The need to
investigate fast and efficient recognition algorithms/systems is important so
as to correctly read the postal codes from mail addresses and to eliminate any
errors during the mail sorting stage. The objective of this study is to
recognize printed numerical postal codes from mail addresses. The proposed
system is a multistage hybrid system which consists of three different feature
extraction methods, i.e., binary, zoning, and fuzzy features, and three
different classifiers, i.e., Hamming Nets, Euclidean Distance, and Fuzzy Neural
Network Classifiers. The proposed system systematically compares the
performance of each of these methods, and ensures that the numerals are
recognized correctly. Comprehensive results provide a very high recognition
rate, outperforming the other known developed methods in the literature.
|
[
{
"created": "Thu, 6 May 2010 07:25:23 GMT",
"version": "v1"
}
] |
2010-05-07
|
[
[
"Alginaih",
"Yasser M.",
""
],
[
"Siddiqi",
"Abdul Ahad",
""
]
] |
The use of OCR in postal services is not yet universal and there are still many countries that process mail sorting manually. Automated Arabic/Indian numeral Optical Character Recognition (OCR) systems for Postal services are being used in some countries, but still there are errors during the mail sorting process, thus causing a reduction in efficiency. The need to investigate fast and efficient recognition algorithms/systems is important so as to correctly read the postal codes from mail addresses and to eliminate any errors during the mail sorting stage. The objective of this study is to recognize printed numerical postal codes from mail addresses. The proposed system is a multistage hybrid system which consists of three different feature extraction methods, i.e., binary, zoning, and fuzzy features, and three different classifiers, i.e., Hamming Nets, Euclidean Distance, and Fuzzy Neural Network Classifiers. The proposed system systematically compares the performance of each of these methods, and ensures that the numerals are recognized correctly. Comprehensive results provide a very high recognition rate, outperforming the other known developed methods in the literature.
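A toy sketch of one feature/classifier pair from the hybrid system above: zoning features scored by a Euclidean-distance classifier. The prototypes and the image are random stand-ins, not trained models.

```python
import numpy as np

def zoning_features(img, zones=4):
    # Split a binary digit image into zones x zones cells and use the
    # ink density of each cell as a feature (one of the three feature
    # types the system combines).
    h, w = img.shape
    feats = [img[i * h // zones:(i + 1) * h // zones,
                 j * w // zones:(j + 1) * w // zones].mean()
             for i in range(zones) for j in range(zones)]
    return np.array(feats)

def euclidean_classify(feat, prototypes):
    # prototypes: digit -> mean feature vector learned from training data
    return min(prototypes, key=lambda d: np.linalg.norm(feat - prototypes[d]))

rng = np.random.default_rng(0)
protos = {d: rng.random(16) for d in range(10)}     # stand-in prototypes
digit = (rng.random((16, 16)) > 0.5).astype(float)  # stand-in image
print(euclidean_classify(zoning_features(digit), protos))
```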
|
2405.13937
|
Xingtong Yu
|
Xingtong Yu, Zhenghao Liu, Yuan Fang, Xinming Zhang
|
DyGPrompt: Learning Feature and Time Prompts on Dynamic Graphs
|
Under review
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic graphs are pervasive in the real world, modeling dynamic relations
between objects across various fields. For dynamic graph modeling, dynamic
graph neural networks (DGNNs) have emerged as a mainstream technique, which are
generally pre-trained on the link prediction task, leaving a significant gap
from the objectives of downstream tasks such as node classification. To bridge
the gap, prompt-based learning has gained traction on graphs. However, existing
efforts focus on static graphs, neglecting the evolution of dynamic graphs. In
this paper, we propose DyGPrompt, a novel pre-training and prompting framework
for dynamic graph modeling. First, we design dual prompts to address the gap in
both task objectives and dynamic variations across pre-training and downstream
tasks. Second, we recognize that node and time features mutually characterize
each other, and propose dual condition-nets to model the evolving node-time
patterns in downstream tasks. Finally, we thoroughly evaluate and analyze
DyGPrompt through extensive experiments on three public datasets.
|
[
{
"created": "Wed, 22 May 2024 19:10:24 GMT",
"version": "v1"
},
{
"created": "Sun, 26 May 2024 01:46:11 GMT",
"version": "v2"
},
{
"created": "Tue, 28 May 2024 10:07:29 GMT",
"version": "v3"
},
{
"created": "Tue, 2 Jul 2024 05:14:10 GMT",
"version": "v4"
},
{
"created": "Wed, 3 Jul 2024 02:06:07 GMT",
"version": "v5"
}
] |
2024-07-04
|
[
[
"Yu",
"Xingtong",
""
],
[
"Liu",
"Zhenghao",
""
],
[
"Fang",
"Yuan",
""
],
[
"Zhang",
"Xinming",
""
]
] |
Dynamic graphs are pervasive in the real world, modeling dynamic relations between objects across various fields. For dynamic graph modeling, dynamic graph neural networks (DGNNs) have emerged as a mainstream technique, which are generally pre-trained on the link prediction task, leaving a significant gap from the objectives of downstream tasks such as node classification. To bridge the gap, prompt-based learning has gained traction on graphs. However, existing efforts focus on static graphs, neglecting the evolution of dynamic graphs. In this paper, we propose DyGPrompt, a novel pre-training and prompting framework for dynamic graph modeling. First, we design dual prompts to address the gap in both task objectives and dynamic variations across pre-training and downstream tasks. Second, we recognize that node and time features mutually characterize each other, and propose dual condition-nets to model the evolving node-time patterns in downstream tasks. Finally, we thoroughly evaluate and analyze DyGPrompt through extensive experiments on three public datasets.
|
2208.02884
|
Nicolaas Kaashoek
|
Nicolaas Kaashoek and Robert Morris
|
CheckSync: Using Runtime-Integrated Checkpoints to Achieve High
Availability
|
14 pages, 6 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CheckSync provides applications with high availability via runtime-integrated
checkpointing. This allows CheckSync to take checkpoints of a process running
in a memory-managed language (Go, for now), which can be resumed on another
machine after a failure. CheckSync uses the runtime to checkpoint only the
process' live memory, doing so without requiring significant changes to
applications.
CheckSync maintains the ease of use provided by virtual machines for the
applications it supports without requiring that an entire virtual machine image
be snapshotted. Because CheckSync captures only the memory used by an
application, it produces checkpoints that are smaller (by an order of
magnitude) than virtual machine snapshots if the memory footprint of the
application is relatively small compared to the state of the rest of the
operating system. Additionally, when running go-cache, a popular in-memory
key/value store, CheckSync reduces throughput by only 12% compared to the 78%
throughput loss when using go-cache's snapshot functionality, the 45% loss when
using CRIU, and the 68% loss when using virtual machine live migration.
|
[
{
"created": "Thu, 4 Aug 2022 20:53:50 GMT",
"version": "v1"
}
] |
2022-08-08
|
[
[
"Kaashoek",
"Nicolaas",
""
],
[
"Morris",
"Robert",
""
]
] |
CheckSync provides applications with high availability via runtime-integrated checkpointing. This allows CheckSync to take checkpoints of a process running in a memory-managed language (Go, for now), which can be resumed on another machine after a failure. CheckSync uses the runtime to checkpoint only the process' live memory, doing so without requiring significant changes to applications. CheckSync maintains the ease of use provided by virtual machines for the applications it supports without requiring that an entire virtual machine image be snapshotted. Because CheckSync captures only the memory used by an application, it produces checkpoints that are smaller (by an order of magnitude) than virtual machine snapshots if the memory footprint of the application is relatively small compared to the state of the rest of the operating system. Additionally, when running go-cache, a popular in-memory key/value store, CheckSync reduces throughput by only 12% compared to the 78% throughput loss when using go-cache's snapshot functionality, the 45% loss when using CRIU, and the 68% loss when using virtual machine live migration.
|
2303.13299
|
Avi Schwarzschild
|
Avi Schwarzschild, Max Cembalest, Karthik Rao, Keegan Hines, John
Dickerson
|
Reckoning with the Disagreement Problem: Explanation Consensus as a
Training Objective
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As neural networks increasingly make critical decisions in high-stakes
settings, monitoring and explaining their behavior in an understandable and
trustworthy manner is a necessity. One commonly used type of explainer is post
hoc feature attribution, a family of methods for giving each feature in an
input a score corresponding to its influence on a model's output. A major
limitation of this family of explainers in practice is that they can disagree
on which features are more important than others. Our contribution in this
paper is a method of training models with this disagreement problem in mind. We
do this by introducing a Post hoc Explainer Agreement Regularization (PEAR)
loss term alongside the standard term corresponding to accuracy, an additional
term that measures the difference in feature attribution between a pair of
explainers. We observe on three datasets that we can train a model with this
loss term to improve explanation consensus on unseen data, and see improved
consensus between explainers other than those used in the loss term. We examine
the trade-off between improved consensus and model performance. And finally, we
study the influence our method has on feature attribution explanations.
|
[
{
"created": "Thu, 23 Mar 2023 14:35:37 GMT",
"version": "v1"
}
] |
2023-03-24
|
[
[
"Schwarzschild",
"Avi",
""
],
[
"Cembalest",
"Max",
""
],
[
"Rao",
"Karthik",
""
],
[
"Hines",
"Keegan",
""
],
[
"Dickerson",
"John",
""
]
] |
As neural networks increasingly make critical decisions in high-stakes settings, monitoring and explaining their behavior in an understandable and trustworthy manner is a necessity. One commonly used type of explainer is post hoc feature attribution, a family of methods for giving each feature in an input a score corresponding to its influence on a model's output. A major limitation of this family of explainers in practice is that they can disagree on which features are more important than others. Our contribution in this paper is a method of training models with this disagreement problem in mind. We do this by introducing a Post hoc Explainer Agreement Regularization (PEAR) loss term alongside the standard term corresponding to accuracy, an additional term that measures the difference in feature attribution between a pair of explainers. We observe on three datasets that we can train a model with this loss term to improve explanation consensus on unseen data, and see improved consensus between explainers other than those used in the loss term. We examine the trade-off between improved consensus and model performance. And finally, we study the influence our method has on feature attribution explanations.
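A hedged PyTorch sketch of a consensus-regularized objective in the spirit of the loss described above; the explainer pair (input gradient vs. gradient-times-input) and the cosine agreement measure are stand-ins, not necessarily the paper's choices.

```python
import torch
import torch.nn.functional as F

def pear_style_loss(model, x, y, lam=0.5):
    # Task term plus an agreement term between two simple gradient-based
    # "explainers"; lam trades accuracy against explanation consensus.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task = F.cross_entropy(logits, y)
    grad = torch.autograd.grad(task, x, create_graph=True)[0]
    attr1 = grad.flatten(1)            # input gradient
    attr2 = (grad * x).flatten(1)      # gradient * input
    disagreement = (1 - F.cosine_similarity(attr1, attr2)).mean()
    return task + lam * disagreement

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(8, 3))
x, y = torch.randn(4, 8), torch.randint(0, 3, (4,))
print(pear_style_loss(model, x, y))
```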
|
2311.11596
|
Yining Miao
|
Yining Miao, Nanlin Shi, Changxing Huang, Yonghao Song, Xiaogang Chen,
Yijun Wang, Xiaorong Gao
|
High-performance cVEP-BCI under minimal calibration
|
35 pages, 5 figures
| null | null | null |
cs.HC cs.IT eess.SP math.IT q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ultimate goal of brain-computer interfaces (BCIs) based on visual
modulation paradigms is to achieve high-speed performance without the burden of
extensive calibration. Code-modulated visual evoked potential-based BCIs
(cVEP-BCIs) modulated by broadband white noise (WN) offer various advantages,
including increased communication speed, expanded encoding target capabilities,
and enhanced coding flexibility. However, the complexity of the
spatial-temporal patterns under broadband stimuli necessitates extensive
calibration for effective target identification in cVEP-BCIs. Consequently, the
information transfer rate (ITR) of cVEP-BCI under limited calibration usually
stays around 100 bits per minute (bpm), significantly lagging behind
state-of-the-art steady-state visual evoked potential-based BCIs (SSVEP-BCIs),
which achieve rates above 200 bpm. To enhance the performance of cVEP-BCIs with
minimal calibration, we devised an efficient calibration stage involving a
brief single-target flickering, lasting less than a minute, to extract
generalizable spatial-temporal patterns. Leveraging the calibration data, we
developed two complementary methods to construct cVEP temporal patterns: the
linear modeling method based on the stimulus sequence and the transfer learning
techniques using cross-subject data. As a result, we achieved the highest ITR
of 250 bpm under a minute of calibration, which has been shown to be comparable
to the state-of-the-art SSVEP paradigms. In summary, our work significantly
improved the cVEP performance under few-shot learning, which is expected to
expand the practicality and usability of cVEP-BCIs.
|
[
{
"created": "Mon, 20 Nov 2023 08:20:51 GMT",
"version": "v1"
}
] |
2023-11-21
|
[
[
"Miao",
"Yining",
""
],
[
"Shi",
"Nanlin",
""
],
[
"Huang",
"Changxing",
""
],
[
"Song",
"Yonghao",
""
],
[
"Chen",
"Xiaogang",
""
],
[
"Wang",
"Yijun",
""
],
[
"Gao",
"Xiaorong",
""
]
] |
The ultimate goal of brain-computer interfaces (BCIs) based on visual modulation paradigms is to achieve high-speed performance without the burden of extensive calibration. Code-modulated visual evoked potential-based BCIs (cVEP-BCIs) modulated by broadband white noise (WN) offer various advantages, including increased communication speed, expanded encoding target capabilities, and enhanced coding flexibility. However, the complexity of the spatial-temporal patterns under broadband stimuli necessitates extensive calibration for effective target identification in cVEP-BCIs. Consequently, the information transfer rate (ITR) of cVEP-BCI under limited calibration usually stays around 100 bits per minute (bpm), significantly lagging behind state-of-the-art steady-state visual evoked potential-based BCIs (SSVEP-BCIs), which achieve rates above 200 bpm. To enhance the performance of cVEP-BCIs with minimal calibration, we devised an efficient calibration stage involving a brief single-target flickering, lasting less than a minute, to extract generalizable spatial-temporal patterns. Leveraging the calibration data, we developed two complementary methods to construct cVEP temporal patterns: the linear modeling method based on the stimulus sequence and the transfer learning techniques using cross-subject data. As a result, we achieved the highest ITR of 250 bpm under a minute of calibration, which has been shown to be comparable to the state-of-the-art SSVEP paradigms. In summary, our work significantly improved the cVEP performance under few-shot learning, which is expected to expand the practicality and usability of cVEP-BCIs.
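ITR figures like those quoted above are conventionally computed with the standard Wolpaw-style formula; a small helper (the example numbers are illustrative, not the paper's results):

```python
import math

def itr_bits_per_minute(n_targets, accuracy, seconds_per_selection):
    # Standard Wolpaw-style ITR used to report BCI performance.
    n, p, t = n_targets, accuracy, seconds_per_selection
    if p <= 0 or p >= 1:
        bits = math.log2(n) if p == 1 else 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / t

# Illustrative numbers only (not taken from the paper's experiments).
print(f"{itr_bits_per_minute(32, 0.95, 1.0):.1f} bpm")
```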
|
2204.08970
|
Zhihao Li
|
Zhihao Li, Si Yi, Zhan Ma
|
Rendering Nighttime Image Via Cascaded Color and Brightness Compensation
|
Accepted by NTIRE 2022 (CVPR Workshop)
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Image signal processing (ISP) is crucial for camera imaging, and neural
network (NN) solutions are extensively deployed for daytime scenes. The lack
of sufficient nighttime image datasets and insights on nighttime illumination
characteristics poses a great challenge for high-quality rendering using
existing NN ISPs. To tackle it, we first built a high-resolution nighttime
RAW-RGB (NR2R) dataset with white balance and tone mapping annotated by expert
professionals. Meanwhile, to best capture the characteristics of nighttime
illumination light sources, we develop the CBUnet, a two-stage NN ISP to
cascade the compensation of color and brightness attributes. Experiments show
that our method has better visual quality compared to the traditional ISP pipeline,
and is ranked at the second place in the NTIRE 2022 Night Photography Rendering
Challenge for two tracks by respective People's and Professional Photographer's
choices. The code and relevant materials are available on our website:
https://njuvision.github.io/CBUnet.
|
[
{
"created": "Tue, 19 Apr 2022 16:15:31 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Apr 2022 17:23:11 GMT",
"version": "v2"
}
] |
2022-04-22
|
[
[
"Li",
"Zhihao",
""
],
[
"Yi",
"Si",
""
],
[
"Ma",
"Zhan",
""
]
] |
Image signal processing (ISP) is crucial for camera imaging, and neural network (NN) solutions are extensively deployed for daytime scenes. The lack of sufficient nighttime image datasets and insights on nighttime illumination characteristics poses a great challenge for high-quality rendering using existing NN ISPs. To tackle it, we first built a high-resolution nighttime RAW-RGB (NR2R) dataset with white balance and tone mapping annotated by expert professionals. Meanwhile, to best capture the characteristics of nighttime illumination light sources, we develop the CBUnet, a two-stage NN ISP to cascade the compensation of color and brightness attributes. Experiments show that our method has better visual quality compared to the traditional ISP pipeline, and is ranked at the second place in the NTIRE 2022 Night Photography Rendering Challenge for two tracks by respective People's and Professional Photographer's choices. The code and relevant materials are available on our website: https://njuvision.github.io/CBUnet.
|
1702.05724
|
Daniel M\'endez Fern\'andez
|
Marco Kuhrmann and Daniel M\'endez Fern\'andez and Thomas Ternit\'e
|
On the Use of Variability Operations in the V-Modell XT Software Process
Line
|
Journal of Software: Evolution and Process, 2015
| null |
10.1002/smr.1751
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software process lines provide a systematic approach to develop and manage
software processes. A software process line defines a reference process containing general process
assets, whereas a well-defined customization approach allows process engineers
to create new process variants, e.g., by extending or modifying process assets.
Variability operations are an instrument to realize flexibility by explicitly
declaring required modifications, which are applied to create a procedurally
generated company-specific process. However, little is known about which
variability operations are suitable in practice. In this article, we present a
study on the feasibility of variability operations to support the development
of software process lines in the context of the V-Modell XT. We analyze which
variability operations are defined and practically used. We provide an initial
catalog of variability operations as an improvement proposal for other process
models. Our findings show that 69 variability operation types are defined
across several metamodel versions of which, however, 25 remain unused. The
found variability operations allow for systematically modifying the content of
process model elements and the process documentation, and they allow for
altering the structure of a process model and its description. Furthermore, we
also find that variability operations can help process engineers to compensate
for process metamodel evolution.
|
[
{
"created": "Sun, 19 Feb 2017 08:58:12 GMT",
"version": "v1"
}
] |
2017-02-21
|
[
[
"Kuhrmann",
"Marco",
""
],
[
"Fernández",
"Daniel Méndez",
""
],
[
"Ternité",
"Thomas",
""
]
] |
Software process lines provide a systematic approach to develop and manage software processes. A software process line defines a reference process containing general process assets, whereas a well-defined customization approach allows process engineers to create new process variants, e.g., by extending or modifying process assets. Variability operations are an instrument to realize flexibility by explicitly declaring required modifications, which are applied to create a procedurally generated company-specific process. However, little is known about which variability operations are suitable in practice. In this article, we present a study on the feasibility of variability operations to support the development of software process lines in the context of the V-Modell XT. We analyze which variability operations are defined and practically used. We provide an initial catalog of variability operations as an improvement proposal for other process models. Our findings show that 69 variability operation types are defined across several metamodel versions of which, however, 25 remain unused. The found variability operations allow for systematically modifying the content of process model elements and the process documentation, and they allow for altering the structure of a process model and its description. Furthermore, we also find that variability operations can help process engineers to compensate for process metamodel evolution.
|
1711.11017
|
Ethan Perez
|
Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca
Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville
|
HoME: a Household Multimodal Environment
|
Presented at NIPS 2017's Visually-Grounded Interaction and Language
Workshop
| null | null | null |
cs.AI cs.CL cs.CV cs.RO cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce HoME: a Household Multimodal Environment for artificial agents
to learn from vision, audio, semantics, physics, and interaction with objects
and other agents, all within a realistic context. HoME integrates over 45,000
diverse 3D house layouts based on the SUNCG dataset, a scale which may
facilitate learning, generalization, and transfer. HoME is an open-source,
OpenAI Gym-compatible platform extensible to tasks in reinforcement learning,
language grounding, sound-based navigation, robotics, multi-agent learning, and
more. We hope HoME better enables artificial agents to learn as humans do: in
an interactive, multimodal, and richly contextualized setting.
|
[
{
"created": "Wed, 29 Nov 2017 18:45:59 GMT",
"version": "v1"
}
] |
2017-11-30
|
[
[
"Brodeur",
"Simon",
""
],
[
"Perez",
"Ethan",
""
],
[
"Anand",
"Ankesh",
""
],
[
"Golemo",
"Florian",
""
],
[
"Celotti",
"Luca",
""
],
[
"Strub",
"Florian",
""
],
[
"Rouat",
"Jean",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Courville",
"Aaron",
""
]
] |
We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
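Since HoME is described as OpenAI Gym-compatible, an episode loop would plausibly look like the classic Gym pattern below; the environment id is a hypothetical placeholder, and the snippet assumes the classic (pre-0.26) Gym API.

```python
import gym

# Hypothetical environment id: HoME advertises Gym compatibility, but
# the exact registration name here is an assumption.
env = gym.make("HoME-Navigation-v0")

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()          # random agent
    obs, reward, done, info = env.step(action)  # classic Gym 4-tuple
    total_reward += reward
print("episode return:", total_reward)
```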
|
1709.03421
|
Riccardo Sven Risuleo
|
Riccardo Sven Risuleo, Giulio Bottegal, H{\aa}kan Hjalmarsson
|
Modeling and identification of uncertain-input systems
|
27 Pages, submitted to Automatica
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a new class of models, called uncertain-input
models, that allows us to treat system-identification problems in which a
linear system is subject to a partially unknown input signal. To encode prior
information about the input or the linear system, we use Gaussian-process
models. We estimate the model from data using the empirical Bayes approach: the
input and the impulse responses of the linear system are estimated using the
posterior means of the Gaussian-process models given the data, and the
hyperparameters that characterize the Gaussian-process models are estimated
from the marginal likelihood of the data. We propose an iterative algorithm to
find the hyperparameters that relies on the EM method and results in simple
update steps. In the most general formulation, neither the marginal likelihood
nor the posterior distribution of the unknowns is tractable. Therefore, we
propose two approximation approaches, one based on Markov-chain Monte Carlo
techniques and one based on variational Bayes approximation. We also show
special model structures for which the distributions can be treated exactly.
Through numerical simulations, we study the application of the uncertain-input
model to the identification of Hammerstein systems and cascaded linear systems.
As part of the contribution of the paper, we show that this model structure
encompasses many classical problems in system identification such as classical
PEM, Hammerstein models, errors-in-variables problems, blind system
identification, and cascaded linear systems. This allows us to build a
systematic procedure to apply the algorithms proposed in this work to a wide
class of classical problems.
|
[
{
"created": "Mon, 11 Sep 2017 14:53:38 GMT",
"version": "v1"
}
] |
2017-09-12
|
[
[
"Risuleo",
"Riccardo Sven",
""
],
[
"Bottegal",
"Giulio",
""
],
[
"Hjalmarsson",
"Håkan",
""
]
] |
In this work, we present a new class of models, called uncertain-input models, that allows us to treat system-identification problems in which a linear system is subject to a partially unknown input signal. To encode prior information about the input or the linear system, we use Gaussian-process models. We estimate the model from data using the empirical Bayes approach: the input and the impulse responses of the linear system are estimated using the posterior means of the Gaussian-process models given the data, and the hyperparameters that characterize the Gaussian-process models are estimated from the marginal likelihood of the data. We propose an iterative algorithm to find the hyperparameters that relies on the EM method and results in simple update steps. In the most general formulation, neither the marginal likelihood nor the posterior distribution of the unknowns is tractable. Therefore, we propose two approximation approaches, one based on Markov-chain Monte Carlo techniques and one based on variational Bayes approximation. We also show special model structures for which the distributions can be treated exactly. Through numerical simulations, we study the application of the uncertain-input model to the identification of Hammerstein systems and cascaded linear systems. As part of the contribution of the paper, we show that this model structure encompasses many classical problems in system identification such as classical PEM, Hammerstein models, errors-in-variables problems, blind system identification, and cascaded linear systems. This allows us to build a systematic procedure to apply the algorithms proposed in this work to a wide class of classical problems.
|
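Editor's illustration: the empirical-Bayes pipeline in the abstract above (GP prior on the unknown input, posterior mean given the output data) condenses into a few lines. The sketch below is a minimal reconstruction under simplifying assumptions, not the authors' code: the impulse response `g` is taken as known and truncated to the data length, the kernel is a squared-exponential, and the noise variance is fixed rather than estimated by EM.

```python
import numpy as np

def rbf_kernel(t, length=2.0, scale=1.0):
    """Squared-exponential Gram matrix over sample times t (assumed prior)."""
    d = t[:, None] - t[None, :]
    return scale * np.exp(-0.5 * (d / length) ** 2)

def posterior_mean_input(y, g, t, noise_var=0.1):
    """Posterior mean E[u | y] for y = conv(g, u) + noise, with u ~ GP(0, K).

    G is the lower-triangular Toeplitz matrix of the length-n impulse
    response g, so that y = G u + e with e ~ N(0, noise_var * I).
    """
    n = len(t)
    G = np.zeros((n, n))
    for i in range(n):
        G[i, : i + 1] = g[: i + 1][::-1]      # discrete convolution rows
    K = rbf_kernel(t)
    S = G @ K @ G.T + noise_var * np.eye(n)   # marginal covariance of y
    return K @ G.T @ np.linalg.solve(S, y)    # Gaussian conditioning

# Hypothetical usage: recover a smooth input from filtered, noisy data.
t = np.arange(50, dtype=float)
g = 0.8 ** t                                   # assumed known system
u_true = np.sin(0.2 * t)
y = np.convolve(g, u_true)[:50] + 0.1 * np.random.default_rng(0).normal(size=50)
u_hat = posterior_mean_input(y, g, t)
```

In the paper's full setting the impulse response is itself GP-modelled and the hyperparameters (kernel scales, noise variance) are updated by the EM iterations described above; the closed-form conditioning step shown here is what those iterations repeatedly exploit.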
2110.04678
|
Ankit Parag Shah
|
Rita Singh, Ankit Shah, Hira Dhamyal
|
An Overview of Techniques for Biomarker Discovery in Voice Signal
|
The last two authors contributed equally to the paper
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper reflects on the effect of several categories of medical conditions
on human voice, focusing on those that may be hypothesized to have effects on
voice, but for which the changes themselves may be subtle enough to have eluded
observation in standard analytical examinations of the voice signal. It
presents three categories of techniques that can potentially uncover such
elusive biomarkers and allow them to be measured and used for predictive and
diagnostic purposes. These approaches include proxy techniques, model-based
analytical techniques and data-driven AI techniques.
|
[
{
"created": "Sun, 10 Oct 2021 01:39:28 GMT",
"version": "v1"
}
] |
2021-10-12
|
[
[
"Singh",
"Rita",
""
],
[
"Shah",
"Ankit",
""
],
[
"Dhamyal",
"Hira",
""
]
] |
This paper reflects on the effect of several categories of medical conditions on human voice, focusing on those that may be hypothesized to have effects on voice, but for which the changes themselves may be subtle enough to have eluded observation in standard analytical examinations of the voice signal. It presents three categories of techniques that can potentially uncover such elusive biomarkers and allow them to be measured and used for predictive and diagnostic purposes. These approaches include proxy techniques, model-based analytical techniques and data-driven AI techniques.
|
2308.01286
|
Diptapriyo Majumdar
|
Diptapriyo Majumdar
|
Enumeration Kernels of Polynomial Size for Cuts of Bounded Degree
|
There have been major revisions to the technicalities and proofs of the paper
| null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enumeration kernelization was first proposed by Creignou et al. [TOCS 2017]
and was later refined by Golovach et al. [JCSS 2022] into two different
variants: fully-polynomial enumeration kernelization and polynomial-delay
enumeration kernelization. In this paper, we consider the DEGREE-d-CUT problem
from the perspective of (polynomial-delay) enumeration kernelization. Given an
undirected graph G = (V, E), a cut F = (A, B) is a degree-d-cut of G if every
$u \in A$ has at most d neighbors in B and every $v \in B$ has at most d
neighbors in A. Checking the existence of a degree-d-cut in a graph is a
well-known NP-hard problem and is well-studied in parameterized complexity
[Algorithmica 2021, IWOCA 2021]. This problem also generalizes a well-studied
problem MATCHING CUT (set d = 1) that has been a central problem in the
literature of polynomial-delay enumeration kernelization. In this paper, we
study three different enumeration variants of this problem, ENUM DEGREE-d-CUT,
ENUM MIN-DEGREE-d-CUT, and ENUM MAX-DEGREE-d-CUT, which intend to enumerate all
the degree-d-cuts, all the minimal degree-d-cuts, and all the maximal
degree-d-cuts, respectively. We consider various structural parameters of the input and for
every fixed $d \geq 1$, we provide polynomial-delay enumeration kernelizations
of polynomial size for ENUM DEGREE-d-CUT and ENUM MAX-DEGREE-d-CUT and
fully-polynomial enumeration kernels of polynomial size for ENUM
MIN-DEGREE-d-CUT.
|
[
{
"created": "Wed, 2 Aug 2023 17:18:19 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Nov 2023 09:14:39 GMT",
"version": "v2"
},
{
"created": "Fri, 1 Dec 2023 07:32:28 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Feb 2024 18:15:47 GMT",
"version": "v4"
},
{
"created": "Sat, 27 Apr 2024 17:42:26 GMT",
"version": "v5"
}
] |
2024-04-30
|
[
[
"Majumdar",
"Diptapriyo",
""
]
] |
Enumeration kernelization was first proposed by Creignou et al. [TOCS 2017] and was later refined by Golovach et al. [JCSS 2022] into two different variants: fully-polynomial enumeration kernelization and polynomial-delay enumeration kernelization. In this paper, we consider the DEGREE-d-CUT problem from the perspective of (polynomial-delay) enumeration kernelization. Given an undirected graph G = (V, E), a cut F = (A, B) is a degree-d-cut of G if every $u \in A$ has at most d neighbors in B and every $v \in B$ has at most d neighbors in A. Checking the existence of a degree-d-cut in a graph is a well-known NP-hard problem and is well-studied in parameterized complexity [Algorithmica 2021, IWOCA 2021]. This problem also generalizes a well-studied problem MATCHING CUT (set d = 1) that has been a central problem in the literature of polynomial-delay enumeration kernelization. In this paper, we study three different enumeration variants of this problem, ENUM DEGREE-d-CUT, ENUM MIN-DEGREE-d-CUT, and ENUM MAX-DEGREE-d-CUT, which intend to enumerate all the degree-d-cuts, all the minimal degree-d-cuts, and all the maximal degree-d-cuts, respectively. We consider various structural parameters of the input and for every fixed $d \geq 1$, we provide polynomial-delay enumeration kernelizations of polynomial size for ENUM DEGREE-d-CUT and ENUM MAX-DEGREE-d-CUT and fully-polynomial enumeration kernels of polynomial size for ENUM MIN-DEGREE-d-CUT.
|
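Editor's illustration: the degree-d-cut definition in the abstract above is easy to operationalize. The following sketch is for intuition only (the paper's contribution is the enumeration kernels, not this brute-force routine); the adjacency-set representation and the naive subset enumeration are assumptions of the example.

```python
from itertools import combinations

def is_degree_d_cut(adj, A, B, d):
    """True iff (A, B) is a degree-d-cut: every u in A has at most d
    neighbours in B and every v in B has at most d neighbours in A."""
    if not A or not B:
        return False
    return (all(len(adj[u] & B) <= d for u in A)
            and all(len(adj[v] & A) <= d for v in B))

def enumerate_degree_d_cuts(adj, d):
    """Naive exponential-time enumeration of all degree-d-cuts.

    Pinning the first vertex to side A yields each unordered cut once.
    This baseline is what polynomial-delay enumeration kernels avoid."""
    vertices = sorted(adj)
    v0, rest = vertices[0], vertices[1:]
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            A = {v0, *extra}
            B = set(vertices) - A
            if is_degree_d_cut(adj, A, B, d):
                yield A, B

# Hypothetical usage on a 4-cycle: with d = 1 this yields its two matching cuts.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(list(enumerate_degree_d_cuts(cycle, 1)))
```

Setting d = 1 specializes the check to the MATCHING CUT condition mentioned above, which is exactly the sense in which DEGREE-d-CUT generalizes it.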
0810.2529
|
Alireza Bayesteh
|
Jamshid Abouei, Alireza Bayesteh, Masoud Ebrahimi, and Amir K.
Khandani
|
On the Throughput Maximization in Decentralized Wireless Networks
|
Submitted to IEEE Transactions on Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A distributed single-hop wireless network with $K$ links is considered, where
the links are partitioned into a fixed number ($M$) of clusters each operating
in a subchannel with bandwidth $\frac{W}{M}$. The subchannels are assumed to be
orthogonal to each other. A general shadow-fading model, described by
parameters $(\alpha,\varpi)$, is considered where $\alpha$ denotes the
probability of shadowing and $\varpi$ ($\varpi \leq 1$) represents the average
cross-link gains. The main goal of this paper is to find the maximum network
throughput in the asymptotic regime of $K \to \infty$, which is achieved by: i)
proposing a distributed and non-iterative power allocation strategy, where the
objective of each user is to maximize its best estimate (based on its local
information, i.e., direct channel gain) of the average network throughput, and
ii) choosing the optimum value for $M$. In the first part of the paper, the
network throughput is defined as the \textit{average sum-rate} of the network,
which is shown to scale as $\Theta (\log K)$. Moreover, it is proved that in
the strong interference scenario, the optimum power allocation strategy for
each user is a threshold-based on-off scheme. In the second part, the network
throughput is defined as the \textit{guaranteed sum-rate}, when the outage
probability approaches zero. In this scenario, it is demonstrated that the
on-off power allocation scheme maximizes the throughput, which scales as
$\frac{W}{\alpha \varpi} \log K$. Moreover, the optimum spectrum sharing for
maximizing the average sum-rate and the guaranteed sum-rate is achieved at M=1.
|
[
{
"created": "Tue, 14 Oct 2008 19:40:22 GMT",
"version": "v1"
}
] |
2008-10-15
|
[
[
"Abouei",
"Jamshid",
""
],
[
"Bayesteh",
"Alireza",
""
],
[
"Ebrahimi",
"Masoud",
""
],
[
"Khandani",
"Amir K.",
""
]
] |
A distributed single-hop wireless network with $K$ links is considered, where the links are partitioned into a fixed number ($M$) of clusters each operating in a subchannel with bandwidth $\frac{W}{M}$. The subchannels are assumed to be orthogonal to each other. A general shadow-fading model, described by parameters $(\alpha,\varpi)$, is considered where $\alpha$ denotes the probability of shadowing and $\varpi$ ($\varpi \leq 1$) represents the average cross-link gains. The main goal of this paper is to find the maximum network throughput in the asymptotic regime of $K \to \infty$, which is achieved by: i) proposing a distributed and non-iterative power allocation strategy, where the objective of each user is to maximize its best estimate (based on its local information, i.e., direct channel gain) of the average network throughput, and ii) choosing the optimum value for $M$. In the first part of the paper, the network throughput is defined as the \textit{average sum-rate} of the network, which is shown to scale as $\Theta (\log K)$. Moreover, it is proved that in the strong interference scenario, the optimum power allocation strategy for each user is a threshold-based on-off scheme. In the second part, the network throughput is defined as the \textit{guaranteed sum-rate}, when the outage probability approaches zero. In this scenario, it is demonstrated that the on-off power allocation scheme maximizes the throughput, which scales as $\frac{W}{\alpha \varpi} \log K$. Moreover, the optimum spectrum sharing for maximizing the average sum-rate and the guaranteed sum-rate is achieved at M=1.
|
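Editor's illustration: the threshold-based on-off scheme that the abstract identifies as optimal uses only local information, which makes it simple to simulate. The sketch below is a Monte-Carlo caricature under stated assumptions (unit-mean exponential direct gains, Bernoulli-shadowed cross gains of size varpi, unit noise power); the threshold tau is left as a free parameter rather than derived from the paper's analysis.

```python
import numpy as np

def on_off_sum_rate(K, tau, varpi=0.2, alpha=0.5, trials=2000, seed=0):
    """Average sum-rate of threshold-based on-off power allocation.

    Each link k sees only its direct gain h_k and transmits at full
    power iff h_k > tau; cross-link gains equal varpi with probability
    alpha (a crude stand-in for the shadow-fading model above)."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(trials):
        h = rng.exponential(1.0, K)                  # direct gains
        on = h > tau                                  # local on-off decision
        cross = varpi * (rng.random((K, K)) < alpha)  # shadowed cross gains
        np.fill_diagonal(cross, 0.0)
        interference = cross @ on                     # at each receiver
        sinr = np.where(on, h / (1.0 + interference), 0.0)
        rates.append(np.log2(1.0 + sinr).sum())
    return float(np.mean(rates))

# Hypothetical sweep: the sum-rate peaks at an interior threshold, mirroring
# the trade-off between silencing links and admitting interference.
for tau in (0.0, 0.5, 1.0, 2.0):
    print(tau, round(on_off_sum_rate(100, tau), 2))
```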
2010.08660
|
Manas Gaur
|
Manas Gaur, Keyur Faldu, Amit Sheth
|
Semantics of the Black-Box: Can knowledge graphs help make deep learning
systems more interpretable and explainable?
|
6 pages + references, 4 figures, Accepted to IEEE Internet Computing 2020
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The recent series of innovations in deep learning (DL) have shown enormous
potential to impact individuals and society, both positively and negatively.
The DL models utilizing massive computing power and enormous datasets have
significantly outperformed prior historical benchmarks on increasingly
difficult, well-defined research tasks across technology domains such as
computer vision, natural language processing, signal processing, and
human-computer interactions. However, the Black-Box nature of DL models and
their over-reliance on massive amounts of data condensed into labels and dense
representations poses challenges for interpretability and explainability of the
system. Furthermore, DLs have not yet been proven in their ability to
effectively utilize relevant domain knowledge and experience critical to human
understanding. This aspect is missing in early data-focused approaches and
necessitated knowledge-infused learning and other strategies to incorporate
computational knowledge. This article demonstrates how knowledge, provided as a
knowledge graph, is incorporated into DL methods using knowledge-infused
learning, which is one of the strategies. We then discuss how this makes a
fundamental difference in the interpretability and explainability of current
approaches, and illustrate it with examples from natural language processing
for healthcare and education applications.
|
[
{
"created": "Fri, 16 Oct 2020 22:55:23 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Nov 2020 02:28:43 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Nov 2020 15:52:55 GMT",
"version": "v3"
},
{
"created": "Fri, 11 Dec 2020 23:03:11 GMT",
"version": "v4"
}
] |
2020-12-15
|
[
[
"Gaur",
"Manas",
""
],
[
"Faldu",
"Keyur",
""
],
[
"Sheth",
"Amit",
""
]
] |
The recent series of innovations in deep learning (DL) have shown enormous potential to impact individuals and society, both positively and negatively. The DL models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interactions. However, the Black-Box nature of DL models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the interpretability and explainability of the system. Furthermore, DL models have not yet proven their ability to effectively utilize relevant domain knowledge and experience critical to human understanding. This aspect is missing in early data-focused approaches and has necessitated knowledge-infused learning and other strategies to incorporate computational knowledge. This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL methods using knowledge-infused learning, which is one of the strategies. We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches, and illustrate it with examples from natural language processing for healthcare and education applications.
|
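Editor's illustration: one common realization of the knowledge-infused learning strategy described above is late fusion, where pretrained knowledge-graph entity embeddings are pooled and concatenated with a text encoding before the task head. The PyTorch sketch below is a toy stand-in, not the article's architecture; the dimensions, mean-pooling, and single linear head are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class KnowledgeInfusedClassifier(nn.Module):
    """Toy fusion classifier: text vector + pooled KG entity embeddings."""

    def __init__(self, text_dim=768, kg_dim=128, hidden=256, n_classes=2):
        super().__init__()
        self.project = nn.Linear(text_dim + kg_dim, hidden)
        self.classify = nn.Linear(hidden, n_classes)

    def forward(self, text_vec, entity_embs):
        # entity_embs: (batch, n_entities, kg_dim), the embeddings of KG
        # entities linked in the input text; mean-pool over entities.
        kg_vec = entity_embs.mean(dim=1)
        fused = torch.cat([text_vec, kg_vec], dim=-1)
        return self.classify(torch.relu(self.project(fused)))

# Hypothetical usage: 4 documents, each linked to 5 KG entities.
model = KnowledgeInfusedClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 5, 128))
```

Because the KG branch carries symbolic, human-auditable entities, the fused features can be traced back to named concepts, which is the interpretability gain the article argues for.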