The last decade has witnessed the rapid development of diverse emerging wireless systems, such as third-generation (3G) and beyond wide-area cellular networks, municipal area networks and local area hotspots. It is widely envisioned that these coexisting heterogeneous networks could complement each other in capacity and coverage and would converge into a common platform over the Internet Protocol (IP). In line with this convergence, user devices equipped with multiple network interfaces are being shipped. Consequently, a nomadic customer with such a multihomed device would expect improved quality of service (QoS) by optimally utilising the access networks available on the move, and thus to be always best connected (ABC). This paper reviews recent patents, together with related standards and representative academic proposals, on supporting multihoming towards enabling the ABC service. Following an introduction to heterogeneous and converged networks and the essential multihoming and ABC concepts, multihoming protocols being standardised are investigated. Subsequently, a survey is performed on the two core components of the multihoming ABC paradigm: intelligent network selection and multihoming handover, covering both concurrent communication via multiple interfaces and inter-system handover schemes. Finally, views on current and future development are provided.
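As a minimal sketch of what intelligent network selection can look like, the following scores candidate access networks by Simple Additive Weighting (SAW), a common multi-attribute decision method in this literature. The network names, attributes and weights are illustrative assumptions, not values from any surveyed proposal:

```python
def select_network(candidates, weights):
    """Pick the best access network by Simple Additive Weighting (SAW).

    candidates: {name: {attribute: value}}. 'bandwidth' is a benefit
    attribute (higher is better); 'delay' and 'cost' are cost attributes
    (lower is better). All names here are illustrative.
    """
    def norm(attr, value, benefit):
        # min-max normalise each attribute across the candidate set
        vals = [c[attr] for c in candidates.values()]
        lo, hi = min(vals), max(vals)
        if hi == lo:
            return 1.0
        x = (value - lo) / (hi - lo)
        return x if benefit else 1.0 - x

    benefit = {"bandwidth": True, "delay": False, "cost": False}
    scores = {
        name: sum(weights[a] * norm(a, attrs[a], benefit[a]) for a in weights)
        for name, attrs in candidates.items()
    }
    return max(scores, key=scores.get)

# hypothetical candidates seen by a dual-interface device
nets = {
    "WLAN":     {"bandwidth": 54.0, "delay": 80.0,  "cost": 1.0},
    "Cellular": {"bandwidth": 2.0,  "delay": 120.0, "cost": 5.0},
}
weights = {"bandwidth": 0.5, "delay": 0.3, "cost": 0.2}
best = select_network(nets, weights)  # → "WLAN" for these weights
```

Real proposals layer user preferences, monetary models and handover cost on top of such a score, but the weighted-normalised-attribute core is the same.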
We propose adding users' Media Access Control (MAC) addresses to standard X.509 certificates to provide more secure authentication. Recent patents demonstrate efforts to strengthen X.509 certificates by adding security features in order to establish secure communications. The MAC address can be added by the issuing Certification Authority (CA) to the “extensions” section of the X.509 certificate. We demonstrate that when two users whose digital certificates carry MAC address information communicate, the MAC address on the first user's certificate can be easily verified by the second user. In this way, security can be improved without markedly degrading system performance, and the level of initial trust between participants in virtual communities can be enhanced.
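As a sketch of how such an extension could be embedded, the following uses the third-party Python `cryptography` library to place a MAC address under a private OID in a self-signed certificate. The OID, address and subject name are illustrative assumptions; in practice the issuing CA would choose a registered OID and sign the certificate itself:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID, ObjectIdentifier
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# hypothetical private-enterprise OID reserved for the MAC extension
MAC_OID = ObjectIdentifier("1.3.6.1.4.1.55555.1")
mac = b"00:1A:2B:3C:4D:5E"  # illustrative MAC address

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"alice@example.org")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                       # self-signed for this sketch
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    # the MAC address travels in the standard "extensions" section
    .add_extension(x509.UnrecognizedExtension(MAC_OID, mac), critical=False)
    .sign(key, hashes.SHA256())
)

# a peer extracts and checks the MAC against the link-layer address it observes
cert_mac = cert.extensions.get_extension_for_oid(MAC_OID).value.value
```

Verification then reduces to comparing `cert_mac` with the MAC address observed on the link, after the usual signature-chain check.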
This paper deals with the question: what are the implications of connectionism for theories of computation? Three possible answers are examined. 1. A theory of implementation: connectionist representations are not semantically structured, and connectionism can be deemed a theory for implementing classical symbolic computation. 2. Representational connectionist computation: connectionist networks compute by exploiting relations of structural resemblance between their connection weights and their target domains; an adequate representational theory of computation will also explain connectionist computation. 3. Non-representational connectionist computation: connectionism need not be committed to internal representations, and an adequate non-representational theory of computation could account for connectionist computation. To some, connectionism seems a promising alternative to the classical computational theory of mind. That debate will not be pursued directly in this paper. Rather, by critiquing the answers above, it will be examined whether connectionist networks (or neural networks) compute and, if so, how they compute. This will also be explored by reviewing some examples of neural networks, including some recent patents (e.g., a colour categorisation network, NETtalk, pattern recognition networks, etc.). Moreover, I argue that an important further question is whether connectionist computation qualifies as digital or analogue computation.
Customer churn prediction is one of the most important problems in customer relationship management (CRM). Its aim is to retain valuable customers and thereby maximize the profit of a company. To predict whether a customer will be a churner or a non-churner, a number of data mining techniques have been applied to churn prediction, such as artificial neural networks, decision trees, and support vector machines. This paper reviews some recent patents along with 21 related studies published from 2000 to 2009 and compares them in terms of the domain datasets used, the data pre-processing and prediction techniques considered, etc. Future research issues are discussed.
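To make the decision-tree family of techniques concrete, here is a minimal one-level decision tree (a "stump") fitted on a single synthetic usage feature. The feature, data and threshold search are illustrative assumptions, not from any reviewed study; the surveyed systems fit full trees over many attributes:

```python
def best_stump(values, labels):
    """Fit a one-level decision tree (stump) on one numeric feature.

    values: feature values per customer; labels: 1 = churner, 0 = non-churner.
    Returns (threshold, training_accuracy) for the best split, predicting
    churn whenever the feature is >= the threshold.
    """
    best_t, best_acc = None, 0.0
    for t in sorted(set(values)):
        preds = [1 if v >= t else 0 for v in values]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# synthetic data: days of inactivity; long-inactive customers churned
days = [2, 5, 8, 40, 55, 60]
churned = [0, 0, 0, 1, 1, 1]
t, acc = best_stump(days, churned)  # → threshold 40, perfectly separating this toy set
```

The comparison dimensions the paper uses (dataset domain, pre-processing, technique) all affect how well such a learned threshold generalises beyond the training sample.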
Model order selection for an Autoregressive Moving Average (ARMA) process is an important problem. This paper presents a new algorithm for estimating the orders of ARMA and autoregressive with exogenous input (ARX) models, based on a rounding approach that uses the floor and ceiling functions. The rounding approach is introduced to deal with the finite precision of binary words. The proposed algorithm selects a sequence of pivot cells from a minimum eigenvalue (MEV) matrix, whose entries are based on the minimum eigenvalues of covariance matrices computed from the observed data. It searches for the corner that contains the estimates of the true orders using the floor and ceiling functions of the pivot cell values and the values of its neighbors. The proposed algorithm is an extension of the algorithm proposed by Liang et al. (IEEE Transactions on Signal Processing, 1993; 41(10): 3003-3009). Recent patents and research advances aim to apply eigenvalue decomposition in estimation and prediction. Among the patents discussed is a method for estimating the uncertainty of a measuring machine in which a covariance matrix is subjected to eigenvalue decomposition.
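To illustrate the raw material the algorithm works on, the following NumPy sketch fills a small MEV-style matrix from synthetic AR(2) data: each cell holds the minimum eigenvalue of a covariance matrix of stacked lag vectors. This is a simplified stand-in under assumed conventions (lag-window length, candidate ranges), not the paper's exact construction, and the corner-search with floor/ceiling rounding is omitted:

```python
import numpy as np

def mev(x, p, q):
    """Minimum eigenvalue of the covariance matrix of stacked lag vectors.

    A small value suggests the lag window implied by (p, q) already spans
    a linear dependence, i.e. the candidate orders cover the true model.
    """
    m = p + q + 1                       # assumed lag-window length
    rows = np.array([x[i:i + m] for i in range(len(x) - m)])
    return float(np.linalg.eigvalsh(np.cov(rows.T))[0])

# synthetic AR(2) data: x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
rng = np.random.default_rng(0)
e = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]

# MEV matrix over candidate orders p, q in 1..3; the corner of small
# entries is what the pivot-cell search with floor/ceiling rounding locates
M = np.array([[mev(x, p, q) for q in range(1, 4)] for p in range(1, 4)])
```

Since each entry is the smallest eigenvalue of a covariance matrix, every cell is non-negative up to numerical error, which is what makes a threshold-style corner search meaningful.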
Among the proposed techniques for software testing, model-based testing has gained attention with the popularization of models in software design and development. Of particular importance are formal models with precise semantics. However, most studies argue that building an abstract formal model of the system under test requires considerable skill from the tester and a large effort in terms of man-hours. To cope with these drawbacks, the agent interaction protocol testing approach proposed here automatically translates a semi-formal specification, described by means of an AUML sequence diagram, into a Recursive Colored Petri Net (RCPN) model. The obtained RCPN is then animated in order to obtain its corresponding reachability graph. Every sequence path of this reachability graph is considered an abstract test case. These test cases are then concretized and submitted to the system under test. Finally, the responses of the system are compared to the expected results derived from the abstract test model. As a case study, we selected the FIPA Brokering Interaction Protocol. Some patent literature is also discussed.
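The step from reachability graph to abstract test cases can be sketched as simple-path enumeration over the graph's markings. The graph below is a toy stand-in loosely inspired by a brokering exchange; its node names are illustrative assumptions, not the actual RCPN markings of the FIPA protocol:

```python
def abstract_test_cases(graph, initial, finals):
    """Enumerate simple paths of a reachability graph.

    Each path from the initial marking to a final marking is one abstract
    test case. Cycles are skipped in this simplified sketch; real
    approaches bound loop unrolling instead.
    """
    cases, stack = [], [(initial, [initial])]
    while stack:
        node, path = stack.pop()
        if node in finals:
            cases.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:          # avoid revisiting a marking
                stack.append((nxt, path + [nxt]))
    return cases

# toy reachability graph: broker either refuses, or agrees and then
# reports success or failure (illustrative markings)
graph = {
    "proxy_sent": ["agreed", "refused"],
    "agreed": ["done", "failure"],
}
cases = abstract_test_cases(graph, "proxy_sent", {"refused", "done", "failure"})
# → 3 abstract test cases, one per distinct protocol outcome
```

Each returned path would then be concretized into real agent messages and the system's responses checked against the expected markings along the path.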
Many countries around the world are striving to establish pervasive community care services. To provide more responsive and personalized care services, this research aims to develop a “Smart Community Care System (SCCS)” based on the concept of Ambient Intelligence, using Radio Frequency Identification (RFID) and Mobile Agent (MA) technologies. Patent US20080294896, which provides a method for transmitting and receiving users' personal information using agent technologies, can be useful for the SCCS in providing customized services. Caregivers can locate care-receivers easily in a community with RFID, while MAs furnish timely and accurate information for care provision. Historical care data can be recorded automatically as well. The SCCS is expected to promote pervasive community care services with convenience, context awareness and accuracy. Patent US7529685, which furnishes a system for storing and retrieving patient data in a database connected to a network, can be integrated with the proposed SCCS to improve the efficiency of patient data processing.
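The RFID-based locating capability reduces, at its core, to keeping the latest reader sighting per tag. The following is a minimal sketch under assumed conventions; tag identifiers, zone names and timestamps are illustrative, and a deployed SCCS would add reader reconciliation and privacy controls:

```python
def ingest_read(sightings, tag_id, zone, ts):
    """Record an RFID reader event, keeping only the latest sighting per tag.

    sightings: {tag_id: (zone, timestamp)}; ts is a monotonically
    comparable timestamp (illustrative integer seconds here).
    """
    prev = sightings.get(tag_id)
    if prev is None or ts >= prev[1]:
        sightings[tag_id] = (zone, ts)

def locate(sightings, tag_id):
    """Return the zone of the tag's most recent sighting, or None."""
    entry = sightings.get(tag_id)
    return entry[0] if entry else None

log = {}
ingest_read(log, "tag-017", "day-room", 100)
ingest_read(log, "tag-017", "garden", 160)
ingest_read(log, "tag-017", "day-room", 120)  # stale read delivered late; ignored
```

A mobile agent querying `locate` would then carry the answer, together with care context, to the caregiver's device, and the same event log doubles as the automatically recorded care history.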
Software piracy is a major challenge to software providers, and most trade organizations today face problems with it. As a result, several systems are available in the market to deal with this problem. Unfortunately, the majority of these systems do not provide an appropriate solution, and the problem remains unsolved. After reviewing the nature of existing systems, the author found that they do not take international standard specifications into consideration when treating this problem; it is therefore difficult for these systems to prevent or stop piracy. The purpose of this paper is thus to develop a new scheme carrying the characteristics of international standard specifications, so as to be able to prevent piracy in any country. The scheme utilizes the Internet and Web services; employs the well-known ElGamal public-key encryption scheme; uses a zero-knowledge proof-of-identity technique to grant users access to the scheme correctly; and uses the international standard copy number to ease many of these difficulties. Results are given from which it is concluded that the proposed scheme, entitled “An Efficient Software Anti-Piracy Scheme”, can significantly help trade organizations and software providers suffering from software piracy. Some recent patents are also discussed in this paper.
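For reference, the ElGamal primitive named above works as follows; this is a textbook round-trip with deliberately tiny toy parameters, not the paper's scheme, and real deployments use large safe primes and padded messages:

```python
import random

def elgamal_keygen(p, g):
    """Private key x and public key h = g^x mod p."""
    x = random.randrange(2, p - 1)
    return x, pow(g, x, p)

def elgamal_encrypt(p, g, h, m):
    """Encrypt m < p as (c1, c2) = (g^k, m * h^k) mod p, with fresh k."""
    k = random.randrange(2, p - 1)
    return pow(g, k, p), (m * pow(h, k, p)) % p

def elgamal_decrypt(p, x, c1, c2):
    """Recover m = c2 * (c1^x)^(-1) mod p, inverting via Fermat (p prime)."""
    s = pow(c1, x, p)
    return (c2 * pow(s, p - 2, p)) % p

p, g = 467, 2                # toy parameters for illustration only
x, h = elgamal_keygen(p, g)
c1, c2 = elgamal_encrypt(p, g, h, 42)
recovered = elgamal_decrypt(p, x, c1, c2)  # → 42
```

In an anti-piracy setting, such encryption would protect licence material in transit, while the zero-knowledge proof-of-identity step (not shown) convinces the server of a user's identity without revealing the secret itself.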