
VOLUME 8, NUMBER 1

JANUARY 2010

ISSN:1548-5390 PRINT,1559-176X ONLINE

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS

SPECIAL ISSUE I: APPLIED MATHEMATICS AND APPROXIMATION THEORY

EUDOXUS PRESS, LLC


SCOPE AND PRICES OF THE JOURNAL

Journal of Concrete and Applicable Mathematics
A quarterly international publication of Eudoxus Press, LLC
Editor in Chief: George Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, U.S.A.
[email protected]

The main purpose of the "Journal of Concrete and Applicable Mathematics" is to publish high-quality original research articles from all subareas of non-pure and/or applicable mathematics and its many real-life applications, as well as connections to other areas of the mathematical sciences, as long as they are presented in a concrete way. It also welcomes related research survey articles and book reviews. A sample list of mathematical areas connected with this publication includes, but is not restricted to: Applied Analysis, Applied Functional Analysis, Probability Theory, Stochastic Processes, Approximation Theory, O.D.E., P.D.E., Wavelets, Neural Networks, Difference Equations, Summability, Fractals, Special Functions, Splines, Asymptotic Analysis, Fractional Analysis, Inequalities, Moment Theory, Numerical Functional Analysis, Tomography, Asymptotic Expansions, Fourier Analysis, Applied Harmonic Analysis, Integral Equations, Signal Analysis, Numerical Analysis, Optimization, Operations Research, Linear Programming, Fuzziness, Mathematical Finance, Stochastic Analysis, Game Theory, aspects of Mathematical Physics, Applied Real and Complex Analysis, Computational Number Theory, Graph Theory, Combinatorics, mathematical topics related to Computer Science, combinations of the above, etc. In general, any kind of concretely presented mathematics which is applicable fits the scope of this journal. Working concretely and in applicable mathematics has become a main trend in recent years, so that we can understand the important problems of our real and scientific world better and more deeply, and solve them.

"Journal of Concrete and Applicable Mathematics" is a peer-reviewed international quarterly journal. We are calling for papers for possible publication. The contributor should send three copies of the contribution to the Editor in Chief, typed in TeX or LaTeX, double spaced. [See: Instructions to Contributors]

Journal of Concrete and Applicable Mathematics (JCAAM), ISSN: 1548-5390 PRINT, 1559-176X ONLINE, is published in January, April, July and October of each year by EUDOXUS PRESS, LLC, 1424 Beaver Trail Drive, Cordova, TN 38016, USA, Tel. 001-901-751-3553, [email protected], http://www.EudoxusPress.com. Visit also www.msci.memphis.edu/~ganastss/jcaam. Webmaster: Ray Clapsadle. Annual Subscription Current Prices: For USA and Canada, Institutional: Print $400, Electronic $250, Print and Electronic $450. Individual: Print $150, Electronic


$80, Print & Electronic $200. For any other part of the world add $50 more to the above prices for Print. Single article PDF file for individuals: $15. Single issue in PDF form for individuals: $60. No credit card payments. Only certified checks, money orders or international checks in US dollars are acceptable. Combination orders of any two from JoCAAA, JCAAM, JAFA receive a 25% discount; all three receive a 30% discount. Copyright © 2010 by Eudoxus Press, LLC. All rights reserved. JCAAM is printed in the USA. JCAAM is reviewed and abstracted by AMS Mathematical Reviews, MATHSCI, and Zentralblatt MATH. Reproduction or transmission of any part of JCAAM, in any form and by any means, without the written permission of the publisher is strictly prohibited. Educators are permitted only to photocopy articles for educational purposes. The publisher assumes no responsibility for the content of published papers. JCAAM IS A JOURNAL OF RAPID PUBLICATION


Editorial Board Associate Editors

Editor in -Chief: George Anastassiou Department of Mathematical Sciences The University Of Memphis Memphis,TN 38152,USA tel.901-678-3144,fax 901-678-2480 e-mail [email protected] www.msci.memphis.edu/~anastasg/anlyjour.htm Areas:Approximation Theory, Probability,Moments,Wavelet, Neural Networks,Inequalities,Fuzzyness. Associate Editors: 1) Ravi Agarwal Florida Institute of Technology Applied Mathematics Program 150 W.University Blvd. Melbourne,FL 32901,USA [email protected] Differential Equations,Difference Equations, Inequalities 2) Drumi D.Bainov Medical University of Sofia P.O.Box 45,1504 Sofia,Bulgaria [email protected] Differential Equations,Optimal Control, Numerical Analysis,Approximation Theory 3) Carlo Bardaro Dipartimento di Matematica & Informatica Universita' di Perugia Via Vanvitelli 1 06123 Perugia,ITALY tel.+390755855034, +390755853822, fax +390755855024 [email protected] , [email protected] Functional Analysis and Approximation Th., Summability,Signal Analysis,Integral Equations, Measure Th.,Real Analysis 4) Francoise Bastin Institute of Mathematics University of Liege 4000 Liege

21) Gustavo Alberto Perla Menzala National Laboratory of Scientific Computation LNCC/MCT Av. Getulio Vargas 333 25651-075 Petropolis, RJ Caixa Postal 95113, Brasil and Federal University of Rio de Janeiro Institute of Mathematics RJ, P.O. Box 68530 Rio de Janeiro, Brasil [email protected] and [email protected] Phone 55-24-22336068, 55-21-25627513 Ext 224 FAX 55-24-22315595 Hyperbolic and Parabolic Partial Differential Equations, Exact controllability, Nonlinear Lattices and Global Attractors, Smart Materials 22) Ram N.Mohapatra Department of Mathematics University of Central Florida Orlando,FL 32816-1364 tel.407-823-5080 [email protected] Real and Complex analysis,Approximation Th., Fourier Analysis, Fuzzy Sets and Systems 23) Rainer Nagel Arbeitsbereich Funktionalanalysis Mathematisches Institut Auf der Morgenstelle 10 D-72076 Tuebingen Germany tel.49-7071-2973242 fax 49-7071-294322 [email protected] evolution equations,semigroups,spectral th., positivity 24) Panos M.Pardalos Center for Appl. Optimization University of Florida 303 Weil Hall P.O.Box 116595 Gainesville,FL 32611-6595 tel.352-392-9011 [email protected] Optimization,Operations Research


BELGIUM [email protected] Functional Analysis,Wavelets 5) Yeol Je Cho Department of Mathematics Education College of Education Gyeongsang National University Chinju 660-701 KOREA tel.055-751-5673 Office, 055-755-3644 home, fax 055-751-6117 [email protected] Nonlinear operator Th.,Inequalities, Geometry of Banach Spaces 6) Sever S.Dragomir School of Communications and Informatics Victoria University of Technology PO Box 14428 Melbourne City M.C Victoria 8001,Australia tel 61 3 9688 4437,fax 61 3 9688 4050 [email protected], [email protected] Math.Analysis,Inequalities,Approximation Th., Numerical Analysis, Geometry of Banach Spaces, Information Th. and Coding 7) Angelo Favini Università di Bologna Dipartimento di Matematica Piazza di Porta San Donato 5 40126 Bologna, ITALY tel.++39 051 2094451 fax.++39 051 2094490 [email protected] Partial Differential Equations, Control Theory, Differential Equations in Banach Spaces 8) Claudio A. Fernandez Facultad de Matematicas Pontificia Unversidad Católica de Chile Vicuna Mackenna 4860 Santiago, Chile tel.++56 2 354 5922 fax.++56 2 552 5916 [email protected] Partial Differential Equations, Mathematical Physics, Scattering and Spectral Theory

25) Svetlozar T.Rachev Dept.of Statistics and Applied Probability Program University of California,Santa Barbara CA 93106-3110,USA tel.805-893-4869 [email protected] AND Chair of Econometrics and Statistics School of Economics and Business Engineering University of Karlsruhe Kollegium am Schloss,Bau II,20.12,R210 Postfach 6980,D-76128,Karlsruhe,Germany tel.011-49-721-608-7535 [email protected] Mathematical and Empirical Finance, Applied Probability, Statistics and Econometrics 26) John Michael Rassias University of Athens Pedagogical Department Section of Mathematics and Infomatics 20, Hippocratous Str., Athens, 106 80, Greece Address for Correspondence 4, Agamemnonos Str. Aghia Paraskevi, Athens, Attikis 15342 Greece [email protected] [email protected] Approximation Theory,Functional Equations, Inequalities, PDE 27) Paolo Emilio Ricci Universita' degli Studi di Roma "La Sapienza" Dipartimento di Matematica-Istituto "G.Castelnuovo" P.le A.Moro,2-00185 Roma,ITALY tel.++39 0649913201,fax ++39 0644701007 [email protected],[email protected] Orthogonal Polynomials and Special functions, Numerical Analysis, Transforms,Operational Calculus, Differential and Difference equations 28) Cecil C.Rousseau Department of Mathematical Sciences The University of Memphis Memphis,TN 38152,USA tel.901-678-2490,fax 901-678-2480 [email protected] Combinatorics,Graph Th., Asymptotic Approximations, Applications to Physics 29) Tomasz Rychlik


9) A.M.Fink Department of Mathematics Iowa State University Ames,IA 50011-0001,USA tel.515-294-8150 [email protected] Inequalities,Ordinary Differential Equations 10) Sorin Gal Department of Mathematics University of Oradea Str.Armatei Romane 5 3700 Oradea,Romania [email protected] Approximation Th.,Fuzzyness,Complex Analysis 11) Jerome A.Goldstein Department of Mathematical Sciences The University of Memphis, Memphis,TN 38152,USA tel.901-678-2484 [email protected] Partial Differential Equations, Semigroups of Operators 12) Heiner H.Gonska Department of Mathematics University of Duisburg Duisburg,D-47048 Germany tel.0049-203-379-3542 office [email protected] Approximation Th.,Computer Aided Geometric Design 13) Dmitry Khavinson Department of Mathematical Sciences University of Arkansas Fayetteville,AR 72701,USA tel.(479)575-6331,fax(479)575-8630 [email protected] Potential Th.,Complex Analysis,Holomorphic PDE, Approximation Th.,Function Th. 14) Virginia S.Kiryakova Institute of Mathematics and Informatics Bulgarian Academy of Sciences Sofia 1090,Bulgaria [email protected] Special Functions,Integral Transforms, Fractional Calculus 15) Hans-Bernd Knoop

Institute of Mathematics Polish Academy of Sciences Chopina 12,87100 Torun, Poland [email protected] Mathematical Statistics,Probabilistic Inequalities 30) Bl. Sendov Institute of Mathematics and Informatics Bulgarian Academy of Sciences Sofia 1090,Bulgaria [email protected] Approximation Th.,Geometry of Polynomials, Image Compression 31) Igor Shevchuk Faculty of Mathematics and Mechanics National Taras Shevchenko University of Kyiv 252017 Kyiv UKRAINE [email protected] Approximation Theory 32) H.M.Srivastava Department of Mathematics and Statistics University of Victoria Victoria,British Columbia V8W 3P4 Canada tel.250-721-7455 office,250-477-6960 home, fax 250-721-8962 [email protected] Real and Complex Analysis,Fractional Calculus and Appl., Integral Equations and Transforms,Higher Transcendental Functions and Appl.,q-Series and q-Polynomials, Analytic Number Th. 33) Stevo Stevic Mathematical Institute of the Serbian Acad. of Science Knez Mihailova 35/I 11000 Beograd, Serbia [email protected]; [email protected] Complex Variables, Difference Equations, Approximation Th., Inequalities 34) Ferenc Szidarovszky Dept.Systems and Industrial Engineering The University of Arizona Engineering Building,111 PO.Box 210020 Tucson,AZ 85721-0020,USA [email protected] Numerical Methods,Game Th.,Dynamic Systems,


Institute of Mathematics Gerhard Mercator University D-47048 Duisburg Germany tel.0049-203-379-2676 [email protected] Approximation Theory,Interpolation 16) Jerry Koliha Dept. of Mathematics & Statistics University of Melbourne VIC 3010,Melbourne Australia [email protected] Inequalities,Operator Theory, Matrix Analysis,Generalized Inverses 17) Mustafa Kulenovic Department of Mathematics University of Rhode Island Kingston,RI 02881,USA [email protected] Differential and Difference Equations 18) Gerassimos Ladas Department of Mathematics University of Rhode Island Kingston,RI 02881,USA [email protected] Differential and Difference Equations 19) V. Lakshmikantham Department of Mathematical Sciences Florida Institute of Technology Melbourne, FL 32901 e-mail: [email protected] Ordinary and Partial Differential Equations, Hybrid Systems, Nonlinear Analysis 20) Rupert Lasser Institut fur Biomathematik & Biomertie,GSF -National Research Center for environment and health Ingolstaedter landstr.1 D-85764 Neuherberg,Germany [email protected] Orthogonal Polynomials,Fourier Analysis, Mathematical Biology

 

Multicriteria Decision making, Conflict Resolution,Applications in Economics and Natural Resources Management 35) Gancho Tachev Dept.of Mathematics Univ.of Architecture,Civil Eng. and Geodesy 1 Hr.Smirnenski blvd BG-1421 Sofia,Bulgaria [email protected] Approximation Theory 36) Manfred Tasche Department of Mathematics University of Rostock D-18051 Rostock Germany [email protected] Approximation Th.,Wavelet,Fourier Analysis, Numerical Methods,Signal Processing, Image Processing,Harmonic Analysis 37) Chris P.Tsokos Department of Mathematics University of South Florida 4202 E.Fowler Ave.,PHY 114 Tampa,FL 33620-5700,USA [email protected],[email protected] Stochastic Systems,Biomathematics, Environmental Systems,Reliability Th. 38) Lutz Volkmann Lehrstuhl II fuer Mathematik RWTH-Aachen Templergraben 55 D-52062 Aachen Germany [email protected] Complex Analysis,Combinatorics,Graph Theory


EDITOR'S NOTE

This special issue on "Applied Mathematics and Approximation Theory" contains expanded versions of articles that were presented at the international conference "Applied Mathematics and Approximation Theory 2008" (AMAT 08), held October 11-13, 2008 at the University of Memphis, Memphis, Tennessee, USA. All articles were refereed.

The organizer and Editor
George Anastassiou

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL. 8, NO. 1, 9-23, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

ITERATIVE RECONSTRUCTION AND STABILITY BOUNDS FOR SAMPLING MODELS

ERNESTO ACOSTA-REYES

Abstract. This paper studies the reconstruction of a function f belonging to a shift-invariant space from the set of its nonuniformly distributed local sampled values. Here it is shown that if the sampling set X = {x_j}_{j∈J} satisfies a suitable density condition, then we can recover the function f from the set of its samples geometrically fast using an iterative algorithm. In addition, the algorithm is analyzed when the data are perturbed by noise, and it is proved that a small perturbation of the set of samples causes only a small change of the original function. Moreover, an upper estimate of the rate of convergence of the algorithm is given. On the other hand, if we assume that X is a separated set, then it is shown that X is a set of sampling and explicit stability bounds are given.

Key words and phrases: Irregular sampling, Non-uniform sampling, Reconstruction, Fast algorithm, Shift-invariant spaces, Stability bounds.

1. Introduction

It is well known that in Sampling Theory there are two main goals (for an overview see [2], [4], [7]-[8], and [11]): First, given a class of functions on R^d, to find conditions on the sampling set X = {x_j}_{j∈J}, where J is a countable index set, under which a function belonging to that class can be reconstructed uniquely and stably from its samples {f(x_j)}_{j∈J}. Second, to find efficient and fast numerical algorithms for recovering the function from its samples on X. It is unrealistic to assume that the samples {f(x_j)}_{j∈J} can be measured exactly. To work with a more realistic model, we consider that our function (signal) belongs to a shift-invariant space V^p(Φ), for some 1 ≤ p ≤ ∞, of the form

(1.1) \[ V^p(\Phi) = \Big\{ \sum_{k\in\mathbb{Z}^d} C_k^T \Phi_k : C \in (\ell^p(\mathbb{Z}^d))^{(r)} \Big\}, \]

and that the samples of the signal have the form

\[ g_{x_j}(f) = \int_{\mathbb{R}^d} f(x)\, d\mu_{x_j}(x), \]


where µ = {µ_{x_j}}_{j∈J} is a collection of finite complex Borel measures on R^d that act on the signal f in a neighborhood of x_j to produce the data {g_{x_j}(f)}_{j∈J}. The form of our sampled data generalizes the model presented by A. Aldroubi in [2], where for each j ∈ J the Radon-Nikodym derivative of µ_{x_j} with respect to the Lebesgue measure on R^d belongs to L^2(R^d). On the other hand, if the collection µ consists of Dirac measures on R^d concentrated at each point of X, then we obtain the model presented by A. Aldroubi and K. Gröchenig in [4]. In this paper we apply an iterative algorithm for recovering the signal f from its sample values {g_{x_j}(f)}_{j∈J}, which uses the density properties of the set X, the support size conditions of the collection µ, and the properties of the generator Φ of V^p(Φ). Here we show that the sequence of functions generated by the algorithm converges to f geometrically fast. In [12], [14]-[18], this method was used for iterative reconstruction of band-limited signals; in [2] and [4] it was used for reconstructing functions belonging to shift-invariant spaces; and in [17] it was used for reconstructing signals belonging to weighted multiply generated shift-invariant spaces. On the other hand, if X is assumed to be a separated set, then we show that X is also a set of sampling for V^p(Φ) and µ, and we give explicit stability bounds in terms of the rate of convergence of the algorithm, the generator of V^p(Φ), the bounded projection from L^p(R^d) onto V^p(Φ), and the uniform upper bound for the total variations of the collection µ. Moreover, an upper estimate for the rate of convergence of the iterative algorithm is given. The stability of the sampling-reconstruction is analyzed when our local sampled data are perturbed by noise, and we show that a small perturbation of the sampled data {g_{x_j}(f)}_{j∈J} in the ℓ^p(J) norm produces a small perturbation of our original function. The remainder of this paper is organized as follows. Section 2 introduces our sampling model and the definitions and notation that we work with in this paper. The main results are presented in Section 3, and we provide the proofs of some of the results in Section 4.

2. Notation and preliminaries

In this section we introduce the sampling model used in this paper and the notation that will be used later. The functions we deal with are functions f ∈ L^p(R^d), for some p ∈ [1, ∞] and d ∈ N, which belong to the shift-invariant space defined in (1.1), where Φ = (φ^1, ..., φ^r)^T is a vector of functions, Φ_k = Φ(· − k), and C = (c^1, ..., c^r)^T is a vector of sequences belonging to (ℓ^p(Z^d))^{(r)}. Among the equivalent norms in (ℓ^p(Z^d))^{(r)} we choose

\[ \|C\|_{(\ell^p(\mathbb{Z}^d))^{(r)}} = \sum_{i=1}^{r} \|c^i\|_{\ell^p(\mathbb{Z}^d)}. \]

Here it is assumed that the set {φ^1(· − k), ..., φ^r(· − k) : k ∈ Z^d} generates an unconditional basis for V^p(Φ). In particular, we require that there exist constants 0 < m_p ≤ M_p < ∞ such that

(2.1) \[ m_p \|C\|_{(\ell^p(\mathbb{Z}^d))^{(r)}} \le \Big\| \sum_{k\in\mathbb{Z}^d} C_k^T \Phi_k \Big\|_{L^p} \le M_p \|C\|_{(\ell^p(\mathbb{Z}^d))^{(r)}}, \quad \forall C \in (\ell^p(\mathbb{Z}^d))^{(r)}. \]
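As a quick plausibility check of (2.1) in the simplest setting, the following sketch (not part of the paper; it assumes a single hat-function generator, r = 1, d = 1, p = 2, on a discretized line) estimates the ratio between the L^p norm of the synthesized function and the ℓ^p norm of its coefficients over random coefficient sequences; the smallest and largest observed ratios act as rough empirical stand-ins for m_p and M_p.

```python
import numpy as np

# Empirical look at the norm equivalence (2.1): single hat generator, d = 1, p = 2.
h, Lng = 0.01, 40.0
x = np.arange(0.0, Lng, h)
hat = lambda t: np.maximum(0.0, 1.0 - np.abs(t))
Phi = np.stack([hat(x - k) for k in np.arange(1, int(Lng) - 1)], axis=1)

p = 2.0
rng = np.random.default_rng(0)
ratios = []
for _ in range(200):                           # random coefficient sequences C
    c = rng.standard_normal(Phi.shape[1])
    f = Phi @ c                                # f = sum_k c_k Phi(. - k) on the grid
    lp_f = (h * np.sum(np.abs(f) ** p)) ** (1 / p)
    lp_c = np.sum(np.abs(c) ** p) ** (1 / p)
    ratios.append(lp_f / lp_c)

print("observed ratio range:", min(ratios), max(ratios))   # rough stand-ins for m_p, M_p
```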

The unconditional basis assumption (2.1) implies (see Theorem 2.4 in [4]) that the space V^p(Φ) is a closed subspace of L^p(R^d). Since we are interested in sampling in V^p(Φ), we add an assumption that makes all the functions in these spaces continuous and, therefore, pointwise evaluations meaningful. Hence, we assume that the generator Φ belongs to a Wiener amalgam space (W_0^1)^{(r)} as defined below. For 1 ≤ p < ∞, a measurable function f belongs to W^p if it satisfies

(2.2) \[ \|f\|_{W^p} = \operatorname*{ess\,sup}_{x\in[0,1]^d} \Big( \sum_{k\in\mathbb{Z}^d} |f(x+k)|^p \Big)^{1/p} < \infty. \]

If p = ∞, a measurable function f belongs to W^∞ if it satisfies

(2.3) \[ \|f\|_{W^\infty} = \sup_{k\in\mathbb{Z}^d} \Big\{ \operatorname*{ess\,sup}_{x\in[0,1]^d} |f(x+k)| \Big\} < \infty. \]

Hence W^∞ coincides with L^∞(R^d). It is well known that for p ∈ [1, ∞], W^p is a Banach space (see [9]-[10]), and clearly W^p ⊆ L^p. By (W^p)^{(r)} we denote the space of vectors Ψ = (ψ^1, ..., ψ^r)^T of W^p-functions with the norm

\[ \|\Psi\|_{(W^p)^{(r)}} = \sum_{i=1}^{r} \|\psi^i\|_{W^p}. \]
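A minimal numerical sketch of the amalgam norm (2.2) in dimension d = 1 (an illustration, not the paper's construction): it tabulates the quantity inside the essential supremum over offsets x ∈ [0, 1) on a finite range of integer translates and takes the maximum; the Gaussian used here is only an assumed test function.

```python
import numpy as np

# Rough numerical approximation of the amalgam norm (2.2) for d = 1:
# ||f||_{W^p} = ess sup_{x in [0,1]} ( sum_k |f(x + k)|^p )^{1/p}.
def amalgam_norm(f, p=1.0, kmin=-50, kmax=50, n_offsets=200):
    offsets = np.linspace(0.0, 1.0, n_offsets, endpoint=False)
    ks = np.arange(kmin, kmax + 1)
    vals = np.abs(f(offsets[:, None] + ks[None, :])) ** p   # rows: offsets, cols: translates
    return float(np.max(np.sum(vals, axis=1) ** (1.0 / p)))

gaussian = lambda t: np.exp(-t ** 2)       # an assumed test function; it lies in W^1
print(amalgam_norm(gaussian, p=1.0))
```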

The closed subspace of (vectors of) continuous functions in W^p (respectively, (W^p)^{(r)}) will be denoted by W_0^p (or (W_0^p)^{(r)}). In this paper we are interested in average sampling performed by a countable collection of measures. We denote by M(R^d) the Banach space of finite complex Borel measures on R^d. The norm on M(R^d) is given by \( \|\mu\| = \int_{\mathbb{R}^d} d|\mu|(y) \), i.e., the total variation of the measure µ. Let J be a countable index set and X = {x_j : j ∈ J} be a subset of R^d. The reconstruction problem in our sampling model consists of


finding the function f ∈ V^p(Φ) from the knowledge of its samples

\[ \Big\{ g_{x_j}(f) = \int_{\mathbb{R}^d} f(x)\, d\mu_{x_j}(x) \Big\}_{j\in J}, \]

where µ = {µ_{x_j}}_{j∈J} is a countable collection of finite complex Borel measures on R^d satisfying the following properties:
(1) There exists a > 0 such that supp µ_{x_j} ⊂ x_j + [−a, a]^d, for all j ∈ J;
(2) There exists M > 0 such that ‖µ_{x_j}‖ ≤ M, for all x_j ∈ X; and
(3) \( \int_{\mathbb{R}^d} d\mu_{x_j} = 1 \), for all j ∈ J.

Definition 2.1. Let 1 ≤ p ≤ ∞ and X = {x_j : j ∈ J} be a countable subset of R^d. We say that X is a set of sampling for V^p(Φ) and µ = {µ_{x_j}}_{j∈J} if there exist constants 0 < A_p ≤ B_p < ∞ such that

(2.4) \[ A_p \|f\|_{L^p} \le \|\{g_{x_j}(f)\}\|_{\ell^p(J)} \le B_p \|f\|_{L^p}, \quad \text{for all } f \in V^p(\Phi). \]

A_p and B_p are called the stability bounds.

Remark 2.1. If in the above definition we let p = 2, then, applying the Riesz representation theorem, it follows that (2.4) is the definition of a frame. Thus, f can be reconstructed from its samples via a dual frame expansion.

Definition 2.2. We say that X = {x_j}_{j∈J} ⊂ R^d is separated if there exists δ > 0 such that inf_{i,j∈J, i≠j} |x_i − x_j| ≥ δ. The number δ is called the separation constant of the set X.

Definition 2.3. A set X = {x_j : j ∈ J} ⊂ R^d is γ-dense in R^d if

\[ \mathbb{R}^d = \bigcup_{j} B_r(x_j), \quad \forall r \ge \gamma, \quad \text{where } B_r(x_j) = \prod_{l=1}^{d} [x_j^l - r,\, x_j^l + r). \]

Definition 2.4. A bounded partition of unity adapted to {B_γ(x_j)}_{j∈J} and associated with the sampling set X is a set of functions {β_j}_{j∈J} that satisfies: (1) 0 ≤ β_j ≤ 1, ∀j ∈ J; (2) supp β_j ⊂ B_γ(x_j); and (3) Σ_{j∈J} β_j = 1.

Given a bounded partition of unity {β_j}_{j∈J} associated with the sampling set X, we define the operator A_X on V^p(Φ) as follows:

(2.5) \[ A_X f = \sum_{j\in J} g_{x_j}(f)\, \beta_j. \]
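The following sketch shows one concrete way to realize A_X in d = 1 (illustrative assumptions only, not from the paper): the measures µ_{x_j} are taken to be normalized uniform measures on [x_j − a, x_j + a], so that properties (1)-(3) hold, and the partition of unity {β_j} is the nodal piecewise-linear basis built on the sampling set X.

```python
import numpy as np

# One concrete realization of (2.5) in d = 1: local averages over [x_j - a, x_j + a]
# (normalized uniform measures) combined with the nodal piecewise-linear partition
# of unity {beta_j} on the sampling set X.
rng = np.random.default_rng(0)
x = np.arange(0.0, 20.0, 0.01)                       # evaluation grid
X = np.sort(np.concatenate(([0.0, 19.99], rng.uniform(0.0, 20.0, 80))))
a = 0.1                                              # support size of the measures

def A_X(f_vals):
    out = np.zeros_like(f_vals)
    for j, xj in enumerate(X):
        g_j = f_vals[np.abs(x - xj) <= a].mean()     # g_{x_j}(f), a local average of f
        e = np.zeros(len(X)); e[j] = 1.0
        out += g_j * np.interp(x, X, e)              # add g_{x_j}(f) * beta_j
    return out

f = np.sin(2.0 * np.pi * x / 7.0)
print("max |f - A_X f| on the grid:", float(np.max(np.abs(f - A_X(f)))))
```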


The quasi-interpolant operator Q_X is defined on sequences c = {c_j}_{j∈J} ∈ ℓ^p(J) by

(2.6) \[ Q_X c = \sum_{j\in J} c_j \beta_j. \]

If f ∈ W_0^p, we write

(2.7) \[ Q_X f = \sum_{j\in J} f(x_j)\, \beta_j \]

for the quasi-interpolant of the sequence c_j = f(x_j).

Remark 2.2. Note that if µ_{x_j} = δ_{x_j} for all j ∈ J, where δ_{x_j} is the Dirac measure on R^d concentrated at x_j, then A_X = Q_X.

3. Main Results

In this section we collect the main results of our paper.

Theorem 3.1. Let Φ ∈ (W_0^1)^{(r)}, 1 ≤ p ≤ ∞, and let P be a bounded projection from L^p(R^d) onto V^p(Φ). Then there exist a density γ_0 = γ_0(Φ, P, p) > 0 and a_0 = a_0(Φ, P, p) > 0 such that every f ∈ V^p(Φ) can be recovered from the data {g_{x_j}(f)}_{j∈J} on any γ-dense set X = {x_j}_{j∈J} (0 < γ ≤ γ_0), for any support size condition (for µ) 0 < a ≤ a_0, by the following iterative algorithm:

(3.1) \[ f_1 = P A_X f, \qquad f_{n+1} = P A_X (f - f_n) + f_n. \]

In this case the sequence {f_n}_{n≥1} converges to f in the W^p norm, hence both in L^p(R^d) and uniformly. The convergence is geometric, that is,

\[ \|f_n - f\|_{L^p(\mathbb{R}^d)} \le \|f_n - f\|_{W^p} \le c_p\, \alpha^n\, \|f\|_{W^p}, \]

for some α = α(P, γ, a, Φ, p) < 1 and for some 0 < c_p < ∞ independent of f and n ∈ N.

Remark 3.1. Notice that since Φ ∈ (W_0^1)^{(r)}, by Theorem 6.2 in [4] the existence of a bounded projection P is guaranteed for all p ∈ [1, ∞], and in this case it is given by \( P f = \sum_{k\in\mathbb{Z}^d} \langle f, \tilde{\Phi}(\cdot - k)\rangle\, \Phi(\cdot - k) \), where {\tilde{\Phi}_k}_{k∈Z^d} is the canonical dual Riesz basis associated to {Φ_k}_{k∈Z^d}. Here \( \langle f, \tilde{\Phi}\rangle = (\langle f, \tilde{\varphi}^1\rangle, ..., \langle f, \tilde{\varphi}^r\rangle) \in \mathbb{C}^r \), \( \langle f, \tilde{\varphi}^i\rangle = \int_{\mathbb{R}^d} f(z)\,\overline{\tilde{\varphi}^i(z)}\,dz \) for 1 ≤ i ≤ r, and \( \bar{z} \) denotes the complex conjugate of z.

Remark 3.2. Note that Theorem 4.1 in [2] is a corollary of Theorem 3.1 when we let p = 2 and r = 1.
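A self-contained toy implementation of algorithm (3.1) in d = 1, r = 1 (a sketch under simplifying assumptions, not the paper's code): the generator is the hat function, P is the least-squares projection onto the span of its integer shifts on a fine grid, and the measures are Dirac measures, so A_X = Q_X. For a sufficiently dense sampling set the printed error decreases geometrically, in line with Theorem 3.1.

```python
import numpy as np

# Toy version of the iterative reconstruction (3.1): d = 1, r = 1, hat generator,
# P = least-squares projection onto span{hat(. - k)} on a fine grid, Dirac measures
# (so A_X = Q_X), and beta_j = nodal piecewise-linear partition of unity on X.
rng = np.random.default_rng(1)
h, Lng = 0.02, 24.0
x = np.arange(0.0, Lng, h)
hat = lambda t: np.maximum(0.0, 1.0 - np.abs(t))
Phi = np.stack([hat(x - k) for k in np.arange(1, 23)], axis=1)
P = Phi @ np.linalg.pinv(Phi)                      # projection onto V along the grid

# gamma-dense, separated sampling set: a jittered grid keeps the gaps small
X = np.sort(np.clip(np.arange(0.0, Lng, 0.2) + rng.uniform(-0.08, 0.08, 120), 0.0, Lng - h))
B = np.stack([np.interp(x, X, np.eye(len(X))[j]) for j in range(len(X))], axis=1)

f_true = Phi @ rng.standard_normal(Phi.shape[1])   # unknown signal f in V(Phi)
AXf = B @ np.interp(X, x, f_true)                  # A_X f, built from the samples only

fn = P @ AXf                                       # f_1 = P A_X f
for n in range(1, 11):                             # f_{n+1} = f_n + P A_X (f - f_n)
    fn = fn + P @ (AXf - B @ np.interp(X, x, fn))
    print(n, float(np.max(np.abs(fn - f_true))))   # shrinks for a dense enough X
```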


The next result shows that if the hypothesis of Theorem 3.1 holds and X is also a separated set, then X is a set of sampling for V^p(Φ) and µ, and explicit stability bounds are given.

Theorem 3.2. Let Φ ∈ (W_0^1)^{(r)} be given. Assume that X is separated with separation constant δ > 0, and that P is a bounded projection from L^p(R^d) onto V^p(Φ). Then the following hold:
(1) Given f ∈ V^p(Φ), the sequence {f_n}_{n≥1} defined by the algorithm (3.1) satisfies

(3.2) \[ \|f_n\|_{L^p(\mathbb{R}^d)} \le \Big( \frac{1 + \alpha - \alpha^2}{1 - \alpha} \Big) \|f\|_{L^p(\mathbb{R}^d)}, \quad \forall n \ge 1, \]

where α is the rate of convergence of the algorithm (3.1).
(2) X is a set of sampling for V^p(Φ) and µ with stability bounds given by

(3.3) \[ A_p = \frac{1-\alpha}{3^d\, \|P\|_{op}\, N^{1/p'}}, \]

and

(3.4) \[ B_p = \frac{M\, N^{1/p}\, 3^{d/p}\, \|\Phi\|_{(W^1)^{(r)}}}{m_p}, \]

where N = N(δ, p, d) = ([\sqrt{d}/\delta] + 1)^d, 1/p + 1/p' = 1, [t] denotes the largest integer less than or equal to t, m_p is the lower bound constant in condition (2.1), ‖P‖_{op} is the operator norm of P, and M > 0 is the uniform upper bound for the total variations of the elements of the collection µ.

As a consequence of Theorem 3.1 and its proof, we obtain the following result, which allows us to estimate the values of γ and a needed for the reconstruction algorithm (3.1). Moreover, Theorem 3.3 provides an upper estimate of the rate of convergence of the algorithm.

Theorem 3.3. Assume that Φ ∈ (W_0^1)^{(r)} and |∇Φ| ∈ (W_0^1)^{(r)}, where |∇Φ| = (|∇φ^1|, ..., |∇φ^r|)^T and ∇φ^i is the gradient of φ^i for 1 ≤ i ≤ r. Let M > 0 be such that ‖µ_{x_j}‖ ≤ M for all j ∈ J, and let P be a bounded projection from L^p(R^d) onto V^p(Φ). Then we have the following upper estimate for the rate of convergence of the algorithm (3.1):

\[ \alpha \le \frac{\|P\|_{op}}{m_p} \Big( \gamma\, (1 + 2\lceil\gamma\rceil)^d + M\big((1 + 2\lceil\gamma\rceil)^d + 2\big)\, a\, (1 + 2\lceil a\rceil)^d \Big)\, \| |\nabla\Phi| \|_{(W^1)^{(r)}}, \]

where m_p is the lower bound constant given in (2.1) and ⌈t⌉ denotes the smallest integer greater than or equal to t.
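To make the constants concrete, the short computation below plugs illustrative values (assumed, not taken from the paper) into (3.3) and (3.4); only the formulas for N, A_p and B_p come from Theorem 3.2.

```python
import math

# Illustrative evaluation of the stability bounds (3.3)-(3.4); all numbers below are
# assumed example values, not constants computed in the paper.
d, p, delta = 1, 2.0, 0.2              # dimension, exponent, separation constant
alpha, P_op = 0.5, 1.0                 # contraction rate and ||P||_op (assumed)
M, m_p, Phi_W1 = 1.0, 0.5, 1.0         # measure bound, (2.1) lower constant, ||Phi||_{(W^1)^(r)}
p_prime = p / (p - 1.0)

N = (math.floor(math.sqrt(d) / delta) + 1) ** d               # points per unit cube
A_p = (1.0 - alpha) / (3 ** d * P_op * N ** (1.0 / p_prime))  # lower bound (3.3)
B_p = M * N ** (1.0 / p) * 3 ** (d / p) * Phi_W1 / m_p        # upper bound (3.4)
print("N =", N, " A_p =", A_p, " B_p =", B_p)
```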


3.1. Reconstructing in the presence of noise. Now we investigate the algorithm (3.1) in the case of noisy samples {f_j'}_{j∈J} ∈ ℓ^p(J), where we do not assume that {f_j'}_{j∈J} are samples of a function f ∈ V^p(Φ). Then, given {β_j}_{j∈J}, a bounded partition of unity associated with X, we use the initialization:

(3.5) \[ f_1 = P Q_X \{f_j'\}, \qquad f_{n+1} = f_1 + (I - P A_X) f_n, \quad \forall n \ge 1, \]

and we have the following result.

Theorem 3.4. Let Φ ∈ (W_0^1)^{(r)}, {f_j'}_{j∈J} ∈ ℓ^p(J), and a bounded projection P from L^p(R^d) onto V^p(Φ) be given. Then the algorithm (3.5) converges to a function f_∞ ∈ V^p(Φ), which satisfies P A_X f_∞ = P Q_X {f_j'}.

As a consequence of Theorems 3.1 and 3.4, the next result shows the stability of the sampling-reconstruction.

Theorem 3.5. Let Φ ∈ (W_0^1)^{(r)} and a bounded projection P from L^p(R^d) onto V^p(Φ) be given, and assume that X is a separated set. Let {f_j'}_{j∈J} ∈ ℓ^p(J) and f ∈ V^p(Φ) with sampled values {g_{x_j}(f)}_{j∈J} be given. Then the following holds:

(3.6) \[ \|f - f_\infty\|_{L^p} \le \frac{3^d\, N^{1/p'}\, \|P\|_{op}}{1 - \alpha}\, \|\{g_{x_j}(f) - f_j'\}\|_{\ell^p(J)}, \]

where N = ([\sqrt{d}/\delta] + 1)^d, 1/p + 1/p' = 1, α = ‖I − P A_X‖_{op}, f_∞ ∈ V^p(Φ) is the function given in Theorem 3.4, and δ > 0 is the separation constant of the set X.

4. Proofs

4.1. Auxiliary results. We begin this section with three results that are needed for the main proofs. The next lemma collects basic facts about Wiener amalgam spaces and shift-invariant spaces. For a proof of this lemma see Proposition 4.2 in [1].

Lemma 4.1. Let Φ ∈ (W^1)^{(r)}, f = \sum_{k\in\mathbb{Z}^d} C_k^T \Phi_k, where C ∈ (ℓ^p(Z^d))^{(r)} and Φ_k = Φ(· − k) for all k ∈ Z^d. Then the following hold:

(4.1) \[ V^p(\Phi) \subset W_0^p, \ \text{for all } 1 \le p \le \infty, \ \text{if } \Phi \in (W_0^1)^{(r)}. \]

(4.2) \[ \|f\|_{W^p} \le \|C\|_{(\ell^p(\mathbb{Z}^d))^{(r)}}\, \|\Phi\|_{(W^1)^{(r)}}. \]


We also need the following lemma, which is stated without proof (see Lemmas 5.1 and 5.2 in [2], Lemma 8.1 in [4], and the references therein).

Lemma 4.2. Let Φ ∈ (W_0^1)^{(r)} and f = \sum_{k\in\mathbb{Z}^d} C_k^T \Phi_k, where C ∈ (ℓ^p(Z^d))^{(r)}. Then:
(1) The oscillation osc_γ(f) belongs to W^p.
(2) The oscillation osc_γ Φ satisfies

(4.3) \[ \|\operatorname{osc}_\gamma \Phi\|_{(W^1)^{(r)}} \le \big((1 + 2\lceil\gamma\rceil)^d + 1\big)\, \|\Phi\|_{(W^1)^{(r)}}, \]

and ‖osc_γ Φ‖_{(W^1)^{(r)}} → 0 as γ → 0.
(3) If |∇Φ| ∈ (W_0^1)^{(r)}, then

(4.4) \[ \|\operatorname{osc}_\gamma \Phi\|_{(W^1)^{(r)}} \le \gamma\, (2\lceil\gamma\rceil + 1)^d\, \| |\nabla\Phi| \|_{(W^1)^{(r)}}. \]

(4) The oscillation osc_γ(f) satisfies

(4.5) \[ \|\operatorname{osc}_\gamma(f)\|_{W^p} \le \|C\|_{(\ell^p(\mathbb{Z}^d))^{(r)}}\, \|\operatorname{osc}_\gamma \Phi\|_{(W^1)^{(r)}}, \quad \forall C \in (\ell^p(\mathbb{Z}^d))^{(r)}. \]

In particular, ‖osc_γ(f)‖_{W^p} → 0 as γ → 0. Moreover,

(4.6) \[ \|Q_X f\|_{L^p} \le \|Q_X f\|_{W^p} \le \big((1 + 2\lceil\gamma\rceil)^d + 2\big)\, \|C\|_{(\ell^p(\mathbb{Z}^d))^{(r)}}\, \|\Phi\|_{(W^1)^{(r)}}, \quad \forall C \in (\ell^p(\mathbb{Z}^d))^{(r)}. \]

Lemma 4.3. Let Φ ∈ (W_0^1)^{(r)} be given, and let P be a bounded projection from L^p(R^d) onto V^p(Φ). Then there exist γ_0 = γ_0(Φ, P, p) > 0 and a_0 = a_0(Φ, P, p) > 0 such that, for any 0 < a ≤ a_0, the operator I − P A_X is a contraction on V^p(Φ) for any γ-dense set X with 0 < γ ≤ γ_0.

Proof. Let P be a bounded projection from L^p(R^d) onto V^p(Φ), and let f = \sum_{k\in\mathbb{Z}^d} C_k^T \Phi_k with C ∈ (ℓ^p(Z^d))^{(r)} be given. Then

\[ |f(x) - (Q_X f)(x)| = \Big| f(x) - \sum_{j\in J} f(x_j)\beta_j(x) \Big| = \Big| \sum_{j\in J} (f(x) - f(x_j))\beta_j(x) \Big| \le \sum_{j\in J} |f(x) - f(x_j)|\,\beta_j(x) \le \operatorname{osc}_\gamma(f)(x) \sum_{j\in J} \beta_j(x) = \operatorname{osc}_\gamma(f)(x). \]


From this pointwise estimate and (4.5) we obtain

\[ \|f - Q_X f\|_{L^p} \le \|f - Q_X f\|_{W^p} \le \|\operatorname{osc}_\gamma(f)\|_{W^p} \le \|C\|_{(\ell^p(\mathbb{Z}^d))^{(r)}}\, \|\operatorname{osc}_\gamma \Phi\|_{(W^1)^{(r)}} \le \frac{1}{m_p}\, \|f\|_{L^p}\, \|\operatorname{osc}_\gamma \Phi\|_{(W^1)^{(r)}}, \]

where we have used condition (2.1) in the last inequality. Consequently,

(4.7) \[ \|f - Q_X f\|_{L^p} \le \frac{1}{m_p}\, \|f\|_{L^p}\, \|\operatorname{osc}_\gamma \Phi\|_{(W^1)^{(r)}}. \]

On the other hand,

\[ |(Q_X f - A_X f)(x)| = \Big| \sum_{j\in J} (f(x_j) - g_{x_j}(f))\,\beta_j(x) \Big| = \Big| \sum_{j\in J} \Big( \int_{\mathbb{R}^d} (f(x_j) - f(z))\, d\mu_{x_j}(z) \Big) \beta_j(x) \Big| \le \sum_{j\in J} \int_{\mathbb{R}^d} |f(x_j) - f(z)|\, d|\mu_{x_j}|(z)\, \beta_j(x) \le \sum_{j\in J} \operatorname{osc}_a(f)(x_j)\, \beta_j(x) \int_{\mathbb{R}^d} d|\mu_{x_j}|(z) \le M \sum_{j\in J} \operatorname{osc}_a(f)(x_j)\, \beta_j(x) \le M \sum_{j\in J} \Big( \sum_{i=1}^{r} \sum_{k\in\mathbb{Z}^d} |c_k^i|\, \operatorname{osc}_a(\varphi^i)(x_j - k) \Big) \beta_j(x). \]

By using Lemma 4.2, condition (2.1), the triangle inequality, and the above pointwise estimate, we have

(4.8) \[ \|Q_X f - A_X f\|_{L^p} \le \frac{M\big((1 + 2\lceil\gamma\rceil)^d + 2\big)\, \|\operatorname{osc}_a \Phi\|_{(W^1)^{(r)}}}{m_p}\, \|f\|_{L^p}. \]

Since f ∈ V^p(Φ), we have P f = f. Therefore,

\[ \|f - P A_X f\|_{L^p} \le \|P f - P Q_X f\|_{L^p} + \|P Q_X f - P A_X f\|_{L^p} \le \|P\|_{op}\, \|f - Q_X f\|_{L^p} + \|P\|_{op}\, \|Q_X f - A_X f\|_{L^p}. \]

Using now (4.7), (4.8), and the above inequality we get

(4.9) \[ \|f - P A_X f\|_{L^p} \le \frac{\|P\|_{op}}{m_p} \Big( \|\operatorname{osc}_\gamma \Phi\|_{(W^1)^{(r)}} + M\big((1 + 2\lceil\gamma\rceil)^d + 2\big)\, \|\operatorname{osc}_a \Phi\|_{(W^1)^{(r)}} \Big)\, \|f\|_{L^p}. \]


Let 0 < ε < m_p/‖P‖_{op} be given. Since ‖osc_γ Φ‖_{(W^1)^{(r)}} → 0 as γ → 0^+, there exist γ_0 = γ_0(ε, Φ, P, p) > 0 and a_0 = a_0(ε, Φ, P, p) > 0 such that

\[ \|\operatorname{osc}_\gamma \Phi\|_{(W^1)^{(r)}} \le \frac{\varepsilon}{2}, \ \text{for all } 0 < \gamma \le \gamma_0, \qquad M\big((1 + 2\lceil\gamma\rceil)^d + 2\big)\, \|\operatorname{osc}_a \Phi\|_{(W^1)^{(r)}} \le \frac{\varepsilon}{2}, \ \text{for all } 0 < a \le a_0. \]

If we choose γ_0 and a_0 so that for any 0 < γ ≤ γ_0 and 0 < a ≤ a_0 we have

\[ \|f - P A_X f\|_{L^p} \le \frac{\varepsilon\, \|P\|_{op}}{m_p}\, \|f\|_{L^p}, \]

then the conclusion of the Lemma follows. □

4.2. Proofs for Section 3. Now we are ready to prove our main results.

Proof of Theorem 3.1. Let e_n = f − f_n be the error after n iterations of the algorithm (3.1). Then the sequence {e_n}_{n∈N} satisfies

\[ e_{n+1} = f - f_{n+1} = f - f_n - P A_X (f - f_n) = (I - P A_X)(f - f_n) = (I - P A_X)(e_n). \]

By Lemma 4.3, there exist a density γ_0 > 0 and a_0 > 0 such that for any 0 < γ ≤ γ_0 and 0 < a ≤ a_0, I − P A_X is a contraction on V^p(Φ). Therefore, taking α := ‖I − P A_X‖_{op} < 1, we have ‖e_{n+1}‖_{L^p} ≤ α‖e_n‖_{L^p}, and by induction it follows that

(4.10) \[ \|e_{n+1}\|_{L^p} \le \alpha^{n+1}\, \|f\|_{L^p}, \]

and ‖e_n‖_{L^p} → 0 geometrically fast. Since on V^p(Φ) the W^p norm and the L^p norm are equivalent, (4.10) also holds in the W^p norm and uniformly on R^d, and Theorem 3.1 is proved. □

Proof of Theorem 3.2. Let us prove (3.2). Note that by hypothesis and Lemma 4.3 there exists γ_0 > 0 such that I − P A_X is a contraction for any γ-dense set X with 0 < γ ≤ γ_0. Hence α = ‖I − P A_X‖_{op} < 1, and thus the operator P A_X is invertible on V^p(Φ). It is not hard to show that P A_X and (P A_X)^{-1} satisfy:

(4.11) \[ 1 - \alpha \le \|P A_X\|_{op} \le 1 + \alpha, \]


and

(4.12) \[ \frac{1}{1+\alpha} \le \|(P A_X)^{-1}\|_{op} \le \frac{1}{1-\alpha}. \]

Let f ∈ V^p(Φ) be given. Since {f_n}_{n≥1} given by algorithm (3.1) satisfies f_n = f_1 + e_1 + e_2 + ... + e_{n−1} for n ≥ 2, f_1 = P A_X f, and since {e_n}_{n≥1} satisfies e_n = (I − P A_X)e_{n−1} for n ≥ 1, we get

\[ f_n = f_1 + \sum_{l=1}^{n} (I - P A_X)^l f. \]

Hence,

\[ \|f_n\|_{L^p} \le \|f_1\|_{L^p} + \sum_{l=1}^{n} \|(I - P A_X)^l f\|_{L^p} \le \Big( \|P A_X\|_{op} + \sum_{l=1}^{n} \alpha^l \Big) \|f\|_{L^p} \le \Big( 1 + \alpha + \frac{\alpha}{1-\alpha} \Big) \|f\|_{L^p} = \Big( \frac{1 + \alpha - \alpha^2}{1-\alpha} \Big) \|f\|_{L^p}, \]

and we obtain (3.2).

Let us show (3.3). Let f ∈ V^p(Φ) be given. Then by Lemma 4.3 there exists γ_0 > 0 such that the operator I − P A_X is a contraction on V^p(Φ) for any γ-dense set X with 0 < γ ≤ γ_0. Hence α = ‖I − P A_X‖_{op} < 1, the operator P A_X is invertible, and (4.12) holds. On the other hand, from the definition of the operators A_X and Q_X it follows that A_X f = Q_X{g_{x_j}(f)}, and thus P A_X f = P Q_X{g_{x_j}(f)}. Therefore f = (P A_X)^{-1} P Q_X{g_{x_j}(f)}. Consequently,

\[ \|f\|_{L^p} \le \|(P A_X)^{-1}\|_{op}\, \|P\|_{op}\, \|Q_X\|_{op}\, \|\{g_{x_j}(f)\}\|_{\ell^p(J)} \le \frac{\|P\|_{op}\, \|Q_X\|_{op}}{1-\alpha}\, \|\{g_{x_j}(f)\}\|_{\ell^p(J)}. \]

In order to complete the proof of (3.3) we need an upper estimate for ‖Q_X‖_{op}. Let χ be the characteristic function of the set B_γ(0) + [0,1]^d. Clearly, we may assume without loss of generality that 0 < γ < 1. Since 0 ≤ β_j ≤ 1 and supp β_j ⊂ B_γ(x_j), then for all x_j ∈ k + [0,1]^d we have β_j(x) ≤ χ(x − k). Therefore,

\[ |Q_X c| = \Big| \sum_{j\in J} c_j \beta_j(x) \Big| \le \sum_{k\in\mathbb{Z}^d} \Big( \sum_{j:\, x_j\in k+[0,1]^d} |c_j| \Big) \chi(x-k), \]

and by (4.2) in Lemma 4.1 we have

\[ \|Q_X c\|_{W^p} \le \Big( \sum_{k\in\mathbb{Z}^d} \Big( \sum_{j:\, x_j\in k+[0,1]^d} |c_j| \Big)^{p} \Big)^{1/p} \|\chi\|_{W^1}. \]

Since X is a separated set with separation constant δ > 0, there are at most N = N(δ, p, d) = ([\sqrt{d}/\delta] + 1)^d sampling points x_j in each cube k + [0,1]^d. By applying Hölder's inequality we get

\[ \Big( \sum_{j:\, x_j\in k+[0,1]^d} |c_j| \Big)^{p} \le N^{p/p'} \sum_{j:\, x_j\in k+[0,1]^d} |c_j|^p, \]

where 1/p + 1/p' = 1. Consequently,

\[ \|Q_X c\|_{W^p} \le N^{1/p'}\, \|\{c_j\}\|_{\ell^p(J)}\, \|\chi\|_{W^1}. \]

We leave to the reader the proof of ‖χ‖_{W^1} = 3^d. Therefore

\[ \|Q_X c\|_{W^p} \le 3^d N^{1/p'}\, \|\{c_j\}\|_{\ell^p(J)}, \qquad\text{and}\qquad (4.13)\ \ \|Q_X\|_{op} \le 3^d N^{1/p'}. \]

Hence

\[ \frac{1-\alpha}{3^d\, \|P\|_{op}\, N^{1/p'}}\, \|f\|_{L^p} \le \|\{g_{x_j}(f)\}\|_{\ell^p(J)}, \quad \text{for all } f \in V^p(\Phi). \]

Let us show (3.4). Note that

\[ \sum_{x_j\in k+[0,1]^d} |g_{x_j}(f)|^p = \sum_{x_j\in k+[0,1]^d} \Big| \int_{\mathbb{R}^d} f(z)\, d\mu_{x_j}(z) \Big|^p \le \sum_{x_j\in k+[0,1]^d} \|\mu_{x_j}\|^p \Big( \int_{\mathbb{R}^d} |f(z)|\, \frac{d|\mu_{x_j}|(z)}{\|\mu_{x_j}\|} \Big)^p \le \sum_{x_j\in k+[0,1]^d} \|\mu_{x_j}\|^p \int_{\mathbb{R}^d} |f(z)|^p\, \frac{d|\mu_{x_j}|(z)}{\|\mu_{x_j}\|} \le M^p \sum_{x_j\in k+[0,1]^d} \operatorname*{ess\,sup}_{z\in x_j+[-a,a]^d} |f(z)|^p. \]

Since X is a separated set, there exist at most N = N(δ, p, d) = ([\sqrt{d}/\delta] + 1)^d sampling points in each cube k + [0,1]^d. Assuming without loss of generality that 0 < a ≤ 1, then

\[ \sum_{x_j\in k+[0,1]^d} |g_{x_j}(f)|^p \le M^p\, 3^d\, N \operatorname*{ess\,sup}_{z\in k+[0,1]^d} |f(z)|^p. \]


Consequently, by summing over k ∈ Z^d in the above inequality, we obtain

\[ \|\{g_{x_j}(f)\}\|_{\ell^p(J)} \le M N^{1/p} 3^{d/p}\, \|f\|_{W^p} \le M N^{1/p} 3^{d/p}\, \|C\|_{(\ell^p(\mathbb{Z}^d))^{(r)}}\, \|\Phi\|_{(W^1)^{(r)}} \le \frac{M N^{1/p} 3^{d/p}\, \|\Phi\|_{(W^1)^{(r)}}}{m_p}\, \|f\|_{L^p}, \]

and Theorem 3.2 is proved. □

Proof of Theorem 3.3. The proof of Theorem 3.3 is a straightforward consequence of (4.3), (4.4), and (4.9). □

Proof of Theorem 3.4. Assume the hypothesis of Theorem 3.4 holds. From Lemma 4.3, the operator I − P A_X is a contraction. Consequently, the sequence {f_n}_{n≥1} defined by algorithm (3.5) converges to a function f_∞ ∈ V^p(Φ). Taking limits on both sides of (3.5) as n → ∞, we have f_∞ = f_1 + (I − P A_X)f_∞. Therefore f_1 − P A_X f_∞ = 0. Taking into account that f_1 = P Q_X{f_j'}, the conclusion of Theorem 3.4 follows. □

Proof of Theorem 3.5. Assume that the hypothesis of Theorem 3.5 holds. By Lemma 4.3 there exists γ_0 > 0 such that the operator I − P A_X is a contraction on V^p(Φ) for any γ-dense set X with 0 < γ ≤ γ_0. Hence α = ‖I − P A_X‖_{op} < 1, the operator P A_X is invertible, and (4.12) holds. On the other hand, from the definition of the operators A_X and Q_X it follows that A_X f = Q_X{g_{x_j}(f)}, and thus P A_X f = P Q_X{g_{x_j}(f)}. Therefore f = (P A_X)^{-1} P Q_X{g_{x_j}(f)}. By Theorem 3.4 there exists a function f_∞ ∈ V^p(Φ) such that P A_X f_∞ = P Q_X{f_j'}; hence f_∞ = (P A_X)^{-1} P Q_X{f_j'}. Consequently,

\[ \|f - f_\infty\|_{L^p} \le \|(P A_X)^{-1}\|_{op}\, \|P Q_X\|_{op}\, \|\{g_{x_j}(f) - f_j'\}\|_{\ell^p(J)} \le \frac{\|P Q_X\|_{op}}{1-\alpha}\, \|\{g_{x_j}(f) - f_j'\}\|_{\ell^p(J)} \le \frac{\|P\|_{op}\, \|Q_X\|_{op}}{1-\alpha}\, \|\{g_{x_j}(f) - f_j'\}\|_{\ell^p(J)}. \]


Now the conclusion of Theorem 3.5 follows by using (4.13). □

Acknowledgments I would like to thank Professor Akram Aldroubi for his valuable suggestions, insightful remarks, and editorial advice.

References

[1] E. Acosta-Reyes, A. Aldroubi, and I. Krishtal, On stability of sampling-reconstruction models, to appear in Adv. Comput. Math., 2008.
[2] A. Aldroubi, Non-uniform weighted average sampling and reconstruction in shift-invariant and wavelet spaces, Appl. Comput. Harmon. Anal. 13, 2002, 151-161.
[3] A. Aldroubi, K. Gröchenig, Beurling-Landau-type theorems for non-uniform sampling in shift invariant spline spaces, J. Fourier Anal. Appl. 6, 2000, 93-103.
[4] A. Aldroubi, K. Gröchenig, Nonuniform sampling and reconstruction in shift-invariant spaces, SIAM Rev. 43, 2001, 585-620.
[5] J. J. Benedetto, Irregular sampling and frames. In: Wavelets: A Tutorial in Theory and Applications (C. K. Chui, Ed.), Boston: Academic Press, 1992, 445-507.
[6] P. L. Butzer, J. Lei, Approximation of signals using measured sampled values and error analysis, Comm. Appl. Anal. 4, 2000, 245-255.
[7] P. L. Butzer, W. Splettstösser, R. L. Stens, The sampling theorem and linear prediction in signal analysis, Jahresber. Deutsch. Math.-Verein. 90, 1988, 1-70.
[8] W. Chen, S. Itoh, and J. Shiki, On sampling in shift invariant spaces, IEEE Trans. Inform. Theory 48, no. 10, 2002, 2802-2809.
[9] H. G. Feichtinger, Generalized amalgams with applications to Fourier transform, Can. J. Math. 42(3), 1990, 395-409.
[10] H. G. Feichtinger, Wiener amalgams over Euclidean spaces and some of their applications, in: K. Jarosz (Ed.), Proc. Conf. Function Spaces, Edwardsville, IL, USA, 1990, Lecture Notes in Pure and Appl. Math., vol. 136, 1992, 107-121.
[11] H. G. Feichtinger, K. Gröchenig, Theory and practice of irregular sampling. In: Wavelets: Mathematics and Applications (J. Benedetto, M. Frazier, eds.), Boca Raton, FL: CRC Press, 305-363.
[12] K. Gröchenig, Reconstruction algorithms in irregular sampling, Math. Comput. 59, 1992, 181-194.
[13] F. Marvasti and M. Analoui, Recovery of signals from nonuniform sampling using iterative methods, Proc. Internat. Sympos. Circuits Systems, Portland, OR, 1987.
[14] K. D. Sauer and J. P. Allebach, Iterative reconstruction of band-limited images from nonuniformly spaced samples, IEEE Trans. Circuits and Systems 34, 1987, 1497-1506.
[15] W. Sun, X. Zhou, Reconstruction of bandlimited functions from local averages, Constr. Approx. 18, 2002, 205-222.
[16] R. G. Wiley, Recovery of band-limited signals from unequally spaced samples, IEEE Trans. Comm. 26, 1978, 135-137.


[17] J. Xian and S. Li, Sampling set conditions in weighted multiply generated shift-invariant spaces and their applications, Appl. Comput. Harmon. Anal. 23, 2007, 171-180.
[18] S. Yeh and H. Stark, Iterative and one-step reconstruction from nonuniform samples by convex projections, J. Opt. Soc. Amer. A 7, 1990, 491-499.
[19] D. C. Youla and H. Webb, Image restoration by the method of convex projections: Part 1 - Theory, IEEE Trans. Med. Imaging 1, 1982, 81-94.

Dept. of Mathematics, Vanderbilt University, Nashville, TN 37240, USA. E-mail: [email protected]

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL. 8, NO. 1, 24-40, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

GLOBAL EXISTENCE AND BLOW UP FOR SOLUTIONS TO HIGHER ORDER BOUSSINESQ SYSTEMS

AKMEL DÉ GODEFROY

Laboratoire de Mathématiques Appliquées, UFRMI, Université d'Abidjan Cocody, 22 BP 582 Abidjan 22, Côte d'Ivoire. E-mail: [email protected]

Abstract. In this paper we deal with higher order Boussinesq systems of equations in one dimension, which were presented by Bona, Chen and Saut in [1]. We show that the solutions of these systems of equations with a nonlinear power α ≥ 1 are global and decay in time for small initial data, and we also show that they blow up in finite time.

Keywords: decay in time, Boussinesq equation, blow-up
AMS Subject Classification: 35B40; 35Q10; 35Q20

1. Introduction

In this paper, consideration is given to the higher order Boussinesq system

(1.1) \[ \eta_t - b\,\eta_{xxt} + b_2\,\eta_{xxxxt} + u_x + a\,u_{xxx} = -(f_1(\eta,u))_x + b_2\,(f_1(\eta,u))_{xxx}, \]
(1.2) \[ u_t - d\,u_{xxt} + d_2\,u_{xxxxt} + \eta_x + a\,\eta_{xxx} = -(f_2(\eta,u))_x + d_2\,(f_2(\eta,u))_{xxx}, \]

with initial data η(x, 0) = η_0(x), u(x, 0) = u_0(x), and where the nonlinearities are f_1(η, u) = η^α u + c_1 η u^α and f_2(η, u) = η^{α+1} + c_2 u^{α+1}, with α ≥ 1 an integer and c_1, c_2 ∈ R; the constants a, b, d, b_2, d_2 ∈ R satisfy some conditions given below, and x ∈ R, t ≥ 0. The system (1.1)-(1.2) describes the propagation of surface water waves. Here the independent variable x is proportional to distance in the direction of propagation, while t is proportional to elapsed time. The quantity η(x, t) + h_0 corresponds to the total depth of the liquid at the point x at time t, where h_0 is the undisturbed water depth. The variable u(x, t) represents the horizontal velocity at the point (x, t). Our study here is devoted to the following particular cases of the system (1.1)-(1.2):

C1 = [b > 0, d ≥ 0, b_2 = 0, d_2 > 0, with a > 0 (case C1-1) or a = 0 (case C1-2)]
C2 = [b ≥ 0, d > 0, b_2 > 0, d_2 = 0, with a > 0 (case C2-1) or a = 0 (case C2-2)]
C3 = [a = 0, b ≥ 0, d ≥ 0, b_2 > 0, d_2 > 0]
C4 = [b > 0, d > 0, b_2 = d_2 = 0, with a > 0 (case C4-1) or a = 0 (case C4-2)]
C5 = [case C5-1: a ≠ 0, b > 0, d > 0, b_2 = 0, d_2 ≥ 0; or case C5-2: a ≠ 0, b > 0, d > 0, b_2 ≥ 0, d_2 = 0]

A local existence result has been obtained by Bona et al. in [1] for the full I.V.P. (1.1)-(1.2) when α = 1 and a < 0, b ≥ 0, d ≥ 0, b_2 > 0, d_2 > 0, c_1, c_2 ∈ R. On the other hand, the case (C4-2), which corresponds to the purely BBM-Boussinesq system, and the case (C4-1) have been shown by the same authors to be locally well posed


in H^s(R), s ≥ 0 (see Bona et al. [1]). Global existence in H^s(R), s ≥ 1, has been proved in [1] for (1.1)-(1.2) under the condition a < 0, b = d > 0, b_2 = d_2 = 0 with α = 1. Note that this condition is included in the case (C5). Furthermore, when studying the system (1.1)-(1.2) under the restriction (C4-1) with α = 1 and with complete or partial dissipation, Chen and Goubet showed in [2] that the solution of this system decays to zero as t goes to +∞. Our study here is concerned with the asymptotic behavior of the solution to the Boussinesq system (1.1)-(1.2). In the first part, we show that the solution of (1.1)-(1.2) under the restrictions (C1), (C2), (C3) or (C4), and with α > 5 (for the cases C1-1, C2-1, C4-1) or α > 9 (for the cases C1-2, C2-2, C3, C4-2), decays to zero as t goes to +∞. In the second part, we show that the solution of (1.1)-(1.2) under the restriction (C5) with a = −b = −d_2, and where α ≥ 1, blows up in finite time.

1.1. Notation. The notation ‖·‖_{r,p} is used to denote the norm in L^p_r (= H^{r,p}), so that if we set J^r = (1 − ∂²/∂x²)^{r/2}, then for u ∈ L^p_r(R), ‖u‖_{r,p} = ‖u‖_{L^p_r} = ‖J^r u‖_{L^p} < ∞. Also, |·|_p, instead of ‖·‖_{0,p}, denotes the norm in L^p, and H^s is used instead of L^2_s. Throughout the paper, c represents a generic constant independent of t and x. The Fourier transform of a function f is denoted by f̂(ξ) or F(f)(ξ), and F^{-1}(f) ≡ f̌ denotes the inverse Fourier transform of f.

2. Local Existence in time. In this section we study the local existence in time of the solution to the Cauchy problem associated with (1.1)-(1.2) under the conditions C1, C2, C3, C4, or C5. We prove the following theorem:

Theorem 2.1. Let η_0, u_0 ∈ H^{s+1}(R), s > 1/2 real. Then there exists a positive constant T_0 > 0 and a unique solution (η, u) ∈ C(0, T_0; H^s(R)) of the Cauchy problem associated with (1.1)-(1.2) under the conditions C1, C2, C3, C4 or C5.

Proof. The system of equations (1.1)-(1.2) can be written as

(2.1) \[ U_t = A U + D F(U), \]

where

\[ U = \begin{pmatrix} \eta \\ u \end{pmatrix}, \qquad F(U) = \begin{pmatrix} f_1(\eta, u) \\ f_2(\eta, u) \end{pmatrix}, \]

\[ A = \begin{pmatrix} 0 & -(1 - b\,\partial_x^2 + b_2\,\partial_x^4)^{-1}(1 + a\,\partial_x^2)\,\partial_x \\ -(1 - d\,\partial_x^2 + d_2\,\partial_x^4)^{-1}(1 + a\,\partial_x^2)\,\partial_x & 0 \end{pmatrix}, \]

and

\[ D = \begin{pmatrix} -(1 - b\,\partial_x^2 + b_2\,\partial_x^4)^{-1}(1 - b_2\,\partial_x^2)\,\partial_x & 0 \\ 0 & -(1 - d\,\partial_x^2 + d_2\,\partial_x^4)^{-1}(1 - d_2\,\partial_x^2)\,\partial_x \end{pmatrix}. \]

The operator A has a matrix-valued symbol \(\hat{A}(\xi)\), ξ ∈ R, which is diagonalizable, and we denote by \(\tilde{A}\) the diagonalized matrix symbol of \(\hat{A}\). Hence we can write


\(\tilde{A} = \hat{P}\,\hat{A}\,(\hat{P})^{-1}\), where \(\hat{P} = (\hat{p}_{ij})\), i, j = 1, 2, is the matrix of the eigenvectors associated with the eigenvalues iλ_1 and iλ_2, with

\[ \lambda_1(\xi) = \frac{\xi(1 - a\xi^2)}{\sqrt{(1 + b\xi^2 + b_2\xi^4)(1 + d\xi^2 + d_2\xi^4)}} = -\lambda_2(\xi). \]
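The function λ_1(ξ) = −λ_2(ξ) plays the role of a dispersion relation for the linearized system, and the decay rates obtained in Section 3 come from the behavior of its stationary-phase points. The sketch below simply evaluates it on a grid for illustrative parameter values chosen to satisfy case C1-1; the numbers are assumptions, not taken from the paper.

```python
import numpy as np

# Dispersion relation lambda_1(xi) = -lambda_2(xi) of the linearized system,
# evaluated for illustrative parameters satisfying case C1-1 (a > 0, b > 0,
# d >= 0, b2 = 0, d2 > 0); these values are assumptions, not from the paper.
def lam(xi, a, b, d, b2, d2):
    return xi * (1.0 - a * xi ** 2) / np.sqrt(
        (1.0 + b * xi ** 2 + b2 * xi ** 4) * (1.0 + d * xi ** 2 + d2 * xi ** 4))

xi = np.linspace(-10.0, 10.0, 2001)
vals = lam(xi, a=1.0, b=1.0, d=0.5, b2=0.0, d2=1.0)
print("sup |lambda_1| on the grid:", float(np.abs(vals).max()))
```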

Let

\[ S(t)(\varphi_1, \varphi_2) = \mathcal{F}^{-1}\big(e^{\tilde{A}t}(\hat{\varphi}_1, \hat{\varphi}_2)\big) = \big(\mathcal{F}^{-1}(e^{\tilde{A}t}\hat{\varphi}_1),\, \mathcal{F}^{-1}(e^{\tilde{A}t}\hat{\varphi}_2)\big) = (S_1(t)\varphi_1,\, S_2(t)\varphi_2) \]

be the semigroup that occurs in the solution of the ordinary differential equation U_t = AU whenever it is written in the eigenvector basis, with, for j = 1, 2,

\[ S_j(t)\varphi_j = \frac{1}{2\pi} \int_{\mathbb{R}} e^{ix\xi + it\lambda_j(\xi)}\, \hat{\varphi}_j(\xi)\, d\xi. \]

Then, after a few computations in which we diagonalize the matrix symbol of A and use the Duhamel formula, we obtain the following integral form of the solution to the I.V.P. (1.1)-(1.2):

(2.2) \[ \eta(x,t) = p_{11} * [S_1(t)(l_{11}*\eta_0 + l_{12}*u_0) + S_2(t)(l_{11}*\eta_0 + l_{12}*u_0)] + \int_0^t p_{11} * S_1(t-\tau)(h_{11}*f_1(\eta,u) + h_{12}*f_2(\eta,u))\, d\tau + \int_0^t p_{11} * S_2(t-\tau)(h_{21}*f_1(\eta,u) + h_{22}*f_2(\eta,u))\, d\tau, \]

(2.3) \[ u(x,t) = p_{21} * [S_1(t)(l_{11}*\eta_0 + l_{12}*u_0) - S_2(t)(l_{11}*\eta_0 + l_{12}*u_0)] + \int_0^t p_{21} * S_1(t-\tau)(h_{11}*f_1(\eta,u) + h_{12}*f_2(\eta,u))\, d\tau - \int_0^t p_{21} * S_2(t-\tau)(h_{21}*f_1(\eta,u) + h_{22}*f_2(\eta,u))\, d\tau, \]

where

\[ \hat{p}_{11} = \hat{p}_{12} = \frac{1}{\sqrt{1 + b\xi^2 + b_2\xi^4}}, \qquad \hat{p}_{21} = -\hat{p}_{22} = -\frac{1}{\sqrt{1 + d\xi^2 + d_2\xi^4}}, \]

\[ \hat{l}_{11} = \hat{l}_{21} = \frac{1}{2}\sqrt{1 + b\xi^2 + b_2\xi^4}, \qquad \hat{l}_{12} = \hat{l}_{22} = -\frac{1}{2}\sqrt{1 + d\xi^2 + d_2\xi^4}, \]

\[ \hat{h}_{11} = \hat{h}_{21} = \frac{-i\xi(1 + b_2\xi^2)}{2\sqrt{1 + b\xi^2 + b_2\xi^4}}, \qquad \hat{h}_{12} = \hat{h}_{22} = \frac{-i\xi(1 + d_2\xi^2)}{2\sqrt{1 + d\xi^2 + d_2\xi^4}}. \]

To finish the proof of Theorem 2.1, and for the sequel, we need the following inequality, obtained thanks to the Sobolev embedding H^s(R) ⊂ L^∞(R), s > 1/2, and to the fact that H^s(R), s > 1/2, is an algebra:

(2.4) \[ \|f_1(\eta,u)\|_s + \|f_2(\eta,u)\|_s \le c\big(\|\eta^\alpha u\|_s + \|u^\alpha \eta\|_s + \|\eta^{\alpha+1}\|_s + \|u^{\alpha+1}\|_s\big) \le c\big(|\eta|_\infty^{\alpha-1}\|\eta\|_s\|u\|_s + |u|_\infty^{\alpha-1}\|\eta\|_s\|u\|_s + |\eta|_\infty^{\alpha-1}\|\eta\|_s^2 + |u|_\infty^{\alpha-1}\|u\|_s^2\big) \le c\big(\|\eta\|_s^\alpha\|u\|_s + \|u\|_s^\alpha\|\eta\|_s + \|\eta\|_s^{\alpha+1} + \|u\|_s^{\alpha+1}\big), \]

and we need the inequalities in the following lemma:

Lemma 2.2. Let a, b, d, b_2, d_2 in (1.1) satisfy the conditions (C1), (C2), (C3), (C4) or (C5). Let ψ ∈ H^{s+1}(R), s > 1/2, M_1 = 1 − b\,∂_x^2 + b_2\,∂_x^4, M_2 = 1 − d\,∂_x^2 + d_2\,∂_x^4, and let p_{ij}, h_{ij}, l_{ij}, ∀ i, j = 1, 2, be defined as above. Then we have the following inequalities, ∀ i, j = 1, 2:

(2.5) \[ \|S_j(r)\psi\|_s + \|p_{ij} * S_j(r)\psi\|_s + \|p_{ij} * S_j(r)(h_{ij} * \psi)\|_s \le \|\psi\|_s, \]
(2.6) \[ \|S_j(r)(h_{ij} * \psi)\|_s + \|p_{ij} * S_j(r)(l_{ij} * \psi)\|_s \le \|\psi\|_{s+1}, \]
(2.7) \[ \|S_j(r)(l_{ij} * \psi)\|_s \le \|M_j \psi\|_s. \]
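At the symbol level, Lemma 2.2 reflects the fact that p̂_{ij} is bounded, p̂_{ij} l̂_{ij} is bounded, and ĥ_{ij} grows at most like (1 + ξ²)^{1/2}. The sketch below evaluates these symbols numerically for illustrative parameter values (a C3-type choice, assumed rather than taken from the paper).

```python
import numpy as np

# Symbol-level check behind Lemma 2.2 (illustrative C3-type parameters, a = 0 and
# b, b2 > 0, assumed here): |p11^| is bounded, |p11^ l11^| is bounded, and
# |h11^| grows at most like (1 + xi^2)^{1/2}, i.e. at most one derivative is lost.
xi = np.linspace(-50.0, 50.0, 5001)
b, b2 = 1.0, 1.0

p11 = 1.0 / np.sqrt(1.0 + b * xi ** 2 + b2 * xi ** 4)
l11 = 0.5 * np.sqrt(1.0 + b * xi ** 2 + b2 * xi ** 4)
h11 = np.abs(xi) * (1.0 + b2 * xi ** 2) / (2.0 * np.sqrt(1.0 + b * xi ** 2 + b2 * xi ** 4))

print("sup |p11^|                =", float(p11.max()))
print("sup |p11^ l11^|           =", float((p11 * l11).max()))
print("sup |h11^|/(1+xi^2)^(1/2) =", float((h11 / np.sqrt(1.0 + xi ** 2)).max()))
```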

Proof of Lemma 2.2. Thanks to the definitions of p_{ij}, h_{ij}, l_{ij} and M_j above, we have, for all i, j = 1, 2 and for all a, b, d, b_2, d_2 satisfying the conditions (C1), (C2), (C3), (C4) or (C5), the inequalities

(2.10)

ˆ ij | ≤ c |ˆ pij | + |ˆ pij h 1 ˆ ij | + |ˆ |h pij ˆlij | ≤ c(1 + ξ 2 ) 2 ˆ = |ˆl21 ψ| ˆ = |Λ ˆ 1 F(M1 ψ)| ≤ |M \ |ˆl11 ψ| 1 ψ|

(2.11)

ˆ = |ˆl22 ψ| ˆ = |Λ ˆ 2 F(M2 ψ)| |ˆl12 ψ|

(2.8) (2.9)

b 1 (ξ) = √ where Λ 2

1 1+bξ 2 +b2 ξ 4

b 2 (ξ) = √ and Λ 2

\ ≤ |M 2 ψ|

1 . 1+dξ 2 +d2 ξ 4

Then with the inequalities

(2.8), (2.9), (2.10), (2.11) and thanks to the Plancherel theorem and the definition of the norm Hs and of Sj , we are lead to the inequalities of the lemma 2.2.  Now, consider the complete metric space F = {(η, u) ∈ (C(0, T ; Hs (R)))2 , sup kηks + sup kuks ≤ µ}, s > [0,T ]

[0,T ]

1 , 2

where µ is a positive real constant. Then an application of the contraction-mappingprinciple to F combined with the inequalities (2.4), (2.5), (2.6), (2.7), above, and an appropriate choice of T yields to the local existence result of the theorem 2.1.  3. Linear Estimates The purpose of this section is to study the linear equation associated with the I.V.P (1.1)-(1.2) under the restrictions C1, C2, C3, or C4. We establish also linear estimates needed for the next section. For that we give some decay estimates and useful inequalities of the solution of the linearized system (1.1)-(1.2) via decay estimates of the semi-group S(t)(ϕ1 , ϕ2 ) = (S1 (t)ϕ1 , S2 (t)ϕ2 ). Consider the linear problem associated to (1.1)-(1.2):   ηt − bηxxt + b2 ηxxxxt + ux + auxxx = 0 x, t ∈ R, (LP )  ut − duxxt + d2 uxxxxt + ηx + aηxxx = 0 with initial data η(x, 0) = η0 (x), u(x, 0) = u0 (x). We prove the following theorem Theorem 3.1. Let η0 (x), J 2 η0 (x), Mj η0 (x), u0 (x), J 2 u0 (x), Mj u0 (x) ∈ H6 (R)∩ L1 (R) where for each j = 1, 2, Mj is defined above in lemma 2.2. Then the solution (η, u) of the linear problem (LP ) satisfies (3.1)  1  c(1 + t)− 4 , in the case C1 − 1, C2 − 1, C4 − 1; |η(x, t)|L∞ (R) +|u(x, t)|L∞ (R) ≤  1 c(1 + t)− 8 , in the case C1 − 2, C2 − 2, C3, C4 − 2. for all t ≥ 0, x ∈ R, where c does not depend on x or t.

28

GLOBAL EXISTENCE

5

This theorem is a consequence of the following lemma : Lemma 3.2. Let for j = 1, 2, ϕj , J 2 ϕj , Mj ϕj ∈ H6 (R) ∩ L1 (R) where Mj are defined above in lemma 2.2, and let Sj (t) be as defined above as the components of the semi-group S(t) that occurs in the computation of the linear equation (LP ). Then for each i, j = 1, 2 with pi,j , hi,j , li,j defined above we have the estimates, ∀t ≥ 0, ∀x ∈ R, 1

(3.2)

|pi,j ∗ Sj (t)ϕj (x)|L∞ + |Sj (t)ϕj (x)|L∞ ≤ c(1 + t)− 4 (kϕj k6 + |ϕj |1 ) if a, b, d, b2 , d2 satisf y (C1 − 1), (C2 − 1) or (C4 − 1),

(3.3)

|pi,j ∗ Sj (t)ϕj (x)|L∞ + |Sj (t)ϕj (x)|L∞ ≤ c(1 + t)− 8 (kϕj k6 + |ϕj |1 ) if a, b, d, b2 , d2 satisf y (C1 − 2), (C2 − 2), (C3) or (C4 − 2),

(3.4)

|pi,j ∗ Sj (t)(hi,j ∗ ϕj )|L∞ ≤ c(1 + t)− 4 (kϕj k8 + |J 2 ϕj |1 ) if a, b, d, b2 , d2 satisf y (C1 − 1), (C2 − 1) or (C4 − 1),

(3.5)

|pi,j ∗ Sj (t)(hi,j ∗ ϕj )|L∞ ≤ c(1 + t)− 8 (kϕj k8 + |J 2 ϕj |1 ) if a, b, d, b2 , d2 satisf y (C1 − 2), (C2 − 2), (C3) or (C4 − 2),

(3.6)

|pi,j ∗ Sj (t)(li,j ∗ ϕj )|L∞ ≤ c(1 + t)− 4 (kMj ϕj k6 + |Mj ϕj |1 ) if a, b, d, b2 , d2 satisf y (C1 − 1), (C2 − 1) or (C4 − 1),

(3.7)

|pi,j ∗ Sj (t)(li,j ∗ ϕj )|L∞ ≤ c(1 + t)− 8 (kMj ϕj k6 + |Mj ϕj |1 ) if a, b, d, b2 , d2 satisf y (C1 − 2), (C2 − 2) (C3), or (C4 − 2),

1

1

1

1

1

where c is independent of ϕj , x In order to prove the lemma 3.2 we need the following proposition. Proposition 3.3. Given x ∈ R and t ∈ R+ , consider the phases functions ψj (ξ) = λj (ξ) + t−1 xξ, j = 1, 2 where λj are defined above. Then under the conditions C1, C2, C3 or C4, ψj , j = 1, 2 have a finite number of stationary points. Moreover for each j = 1, 2, there exists at least a stationary point ξsj of ψj which verifies (3.8)

2 R(ξsj )≥0

where R(y) = 1−3ay−[bd+b2 +d2 +2a(b+d)]y 2 −[2(b2 d+bd2 )+a(bd+b2 +d2 )]y 3 −3b2 d2 y 4 +ab2 d2 y 5 is variable whenever a, b, d, b2 , d2 satisfy the conditions C1, C2, C3 or C4 Proof. Proof of proposition 3.3. As given above we have (for the full system (1.1)(1.2)) ξ(1 − aξ 2 ) λj (ξ) = (−1)j−1 p . 2 (1 + bξ + b2 ξ 4 )(1 + dξ 2 + d2 ξ 4 )

29

´ GODEFROY AKMEL DE

6

Then for j = 1, 2, ψj 0 (ξ) = 0

⇔ λj 0 (ξ) + t−1 xξ = 0 ⇔

{1 − 3aξ 2 − [bd + b2 + d2 + 2a(b + d)]ξ 4 − [2(b2 d + bd2 ) + a(bd + b2 + d2 )]ξ 6 −3b2 d2 ξ 8 + ab2 d2 ξ 10 }2 − x2 t−2 (1 + bξ 2 + b2 ξ 4 )3 (1 + dξ 2 + d2 ξ 4 )3 = 0

so that ψj 0 (ξ) = 0 ⇔ P (ξ 2 ) = 0

(3.9) where 2

P (y) = R(y) − x2 t−2 (1 + by + b2 y 2 )3 (1 + dy + d2 y 2 )3 with R(y) = 1−3ay−[bd+b2 +d2 +2a(b+d)]y 2 −[2(b2 d+bd2 )+a(bd+b2 +d2 )]y 3 −3b2 d2 y 4 +ab2 d2 y 5 . Therefore thanks to (3.9), the stationary points of ψ are such that their square are roots of P (y). Hence since P (y) is a polynomial of degree 12 so that it has at most 12 roots, we deduce from (3.9) that each ψj , j = 1, 2 has at most 24 stationary points in R. Let us prove now the second part of the proposition 3.3 (for all the cases C1, C2, C3 or C4). Proof for the case C1. If a, b, d, b2 , d2 satisfy C1, then (3.9) is verified by 2

P (y) = R(y) − x2 t−2 (1 + by)3 (1 + dy + d2 y 2 )3 where R(y) = 1 − 3ay − (bd + d2 + 2ab + 2ad)y 2 − (2bd2 + abd + ad2 )y 3 . We see that for fixed x and t large enough, P (0) = 1 − x2 t−2 ≥ 0.

(3.10)

Moreover, R(0) = 1 > 0 and limξ−→∞ R(ξ 2 ) = −∞ so that since ξ 7−→ R(ξ 2 ) is continuous, there exists at least a ξ0 ≥ 0 such that R(ξ02 ) = 0. Henceforth, with a such ξ0 we have (3.11)

P (ξ02 ) = −x2 t−2 (1 + bξ02 )3 (1 + dξ02 + d2 ξ04 )3 .

Therefore thanks to (3.11) and since b, d, d2 are positives, we have P (ξ02 ) ≤ 0.

(3.12)

Hence since ξ 7−→ P (ξ 2 ) is continuous, we deduce from (3.10)-(3.12) that the equation P (ξ 2 ) = 0 has at least one solution ξs ∈]0, ξ0 [ where ξ0 ≥ 0 is a root of R(ξ 2 ). We deduce with (3.9) that for each j = 1, 2, ψj has at least one stationary point ξsj ∈]0, ξ0 [. Moreover, since for each j = 1, 2, ψj has a stationary point ξsj which verifies (3.13)

0 < ξsj < ξ0 2

where ξ0 is a root of R(ξ ), and hence since for all y ≥ 0, R(y) is a decreasing function, we get for each j = 1, 2, thanks to (3.13) (3.14)

2 R(ξsj ) ≥ R(ξ02 ) = 0

30

GLOBAL EXISTENCE

7

and the inequality (3.14) finishes the proof of Proposition 3.3 for the case C1. (Note here that if we had −ξ_0 < ξ_{sj} < 0, then we would have 0 < ξ_{sj}^2 < ξ_0^2 and consequently (3.14).)
Proof for the case C2. The proof of Proposition 3.3 for the case C2 follows from that of the case C1 by symmetry.
Proof for the cases C3 and C4. For the cases C3 and C4, the functions P(y) and R(y) corresponding to the case C3 and to the case C4 verify the same properties as the functions P(y) and R(y) in the proof for the case C1, particularly the properties (3.9), (3.10), (3.12), (3.14). Therefore, following the same lines as the proof for the case C1, we obtain the second part of Proposition 3.3 for the cases C3 and C4. This finishes the proof of Proposition 3.3. □
Let us now prove Lemma 3.2.
Proof of Lemma 3.2. If 0 ≤ t ≤ 1, thanks to the definitions of S_j(t), j = 1, 2, given in Section 2, and of ψ_j in Proposition 3.3, and thanks to the Schwarz inequality, we have
(3.15)   |S_j(t)ϕ_j(x)| = (1/2π) |∫_R e^{itψ_j(ξ)} ϕ̂_j(ξ) dξ| ≤ c (∫_R (1 + ξ^2)^{-6} dξ)^{1/2} ||ϕ_j||_6 ≤ c ||ϕ_j||_6 ≤ c(1 + t)^{-1/4} ||ϕ_j||_6.
If t ≥ 1, let Ω = {ξ ∈ R : |ξ| ≤ t^{1/(5m)}}, m ≥ 1, and q_{jt}(ξ) = χ_Ω(ξ) e^{itλ_j(ξ)}; then, thanks to the Schwarz and Young inequalities,
(3.16)   |S_j(t)ϕ_j(x)| = (1/2π) |(∫_Ω + ∫_{Ω^c}) e^{itλ_j(ξ)+ixξ} ϕ̂_j(ξ) dξ|
   ≤ c |q̆_{jt}(x) ∗ ϕ_j(x)|_∞ + c (∫_{Ω^c} (1 + ξ^2)^{-6} dξ)^{1/2} (∫_{Ω^c} (1 + ξ^2)^{6} |ϕ̂_j(ξ)|^2 dξ)^{1/2}
   ≤ c |q̆_{jt}(x)|_∞ |ϕ_j|_1 + c t^{-1/m} ||ϕ_j||_6.

It remains to estimate q̆_{jt}(x). For the sequel we need the following notation. Let E_s = {ξ ∈ R : ψ_j'(ξ) = 0, j = 1, 2} be the set of stationary points of ψ_j. We know from Proposition 3.3 that E_s has a finite number of elements. Hence set, for m ≥ 1,
N(ζ, t^{-1/m}) = ⋃_{ζ∈E_s} B̄(ζ, t^{-1/m}) ∪ ⋃_{ζ∈E_s} {ξ ∈ R : |ξ + ζ| ≤ t^{-1/m}} ∪ {ξ ∈ R : |ξ| ≤ t^{-1/m}},
where, for each ζ ∈ E_s, B̄(ζ, t^{-1/m}) = {ξ ∈ R : |ξ − ζ| ≤ t^{-1/m}}. Then we have
(3.17)   q̆_{jt}(x) = (∫_{Ω∩N(ζ,t^{-1/m})} + ∫_{Ω∩{N(ζ,t^{-1/m})}^c}) e^{itλ_j(ξ)+ixξ} dξ = I_1 + I_2.


Since, from Proposition 3.3, card(E_s) < ∞, we get
(3.18)   |I_1| ≤ ∑_{ζ∈E_s} ∫_{Ω∩N(ζ,t^{-1/m})} dξ ≤ ∑_{ζ∈E_s} ∫_{B̄(ζ,t^{-1/m})} dξ + ∑_{ζ∈E_s} ∫_{{|ξ+ζ|≤t^{-1/m}}} dξ + ∫_{{|ξ|≤t^{-1/m}}} dξ ≤ c t^{-1/m}.

For I_2, we point out that on {N(ζ, t^{-1/m})}^c the phases ψ_j, j = 1, 2, have no stationary point, so that we can integrate I_2 by parts as follows:
(3.19)   |I_2| = |∫_{Ω∩{N(ζ,t^{-1/m})}^c} e^{itψ_j(ξ)} dξ|
   = t^{-1} |∫_{Ω∩{N(ζ,t^{-1/m})}^c} (∂/∂ξ)(e^{itψ_j(ξ)}) (1 / (∂ψ_j(ξ)/∂ξ)) dξ|
   ≤ t^{-1} ∫_{Ω∩{N(ζ,t^{-1/m})}^c} |(∂/∂ξ)(1 / (∂ψ_j(ξ)/∂ξ))| dξ + t^{-1} ∫_{∂{Ω∩{N(ζ,t^{-1/m})}^c}} dξ / |∂ψ_j(ξ)/∂ξ|
   ≤ c t^{-1} ∫_{Ω∩{N(ξ_{sj},t^{-1/m})}^c} (|∂^2 ψ_j(ξ)/∂ξ^2| / |∂ψ_j(ξ)/∂ξ|^2) dξ,
where, for each j = 1, 2, ξ_{sj} is a stationary point of ψ_j which satisfies the property R(ξ_{sj}^2) ≥ 0 of Proposition 3.3.
Before continuing, let us state the following inequalities, needed for the proof of Lemma 3.2. We claim that on Ω ∩ {N(ξ_{sj}, t^{-1/m})}^c, and for m ≥ 3, if a, b, d, b_2, d_2 satisfy (C1-1), (C2-1) or (C4-1), then for each j = 1, 2,
(3.20)   |ψ_j''(ξ)| / |ψ_j'(ξ)|^2 ≤ c t^{3/m} if |ξ| ≤ 1,   and ≤ c t^{14/(5m)} otherwise,
and if a, b, d, b_2, d_2 satisfy (C1-2), (C2-2), (C3) or (C4-2), then for each j = 1, 2,
(3.21)   |ψ_j''(ξ)| / |ψ_j'(ξ)|^2 ≤ c t^{7/m} if |ξ| ≤ 1,   and ≤ c t^{3/m} otherwise.
Proof of (3.20) and (3.21). Thanks to ψ_j and λ_j as defined above in Proposition 3.3 and in Section 2, we have
ψ_j'(ξ) = λ_j'(ξ) + t^{-1} x. We find (after straightforward computations)
λ_j'(ξ) = (−1)^{j−1} R(ξ^2) / [Q^3(ξ^2)]^{1/2},
where, for the full system (1.1)-(1.2), R(y) = 1 − 3ay − (bd + b_2 + d_2 + 2ab + 2ad)y^2 − (2b_2 d + 2b d_2 + abd + ab_2 + 2a d_2)y^3 − 3 b_2 d_2 y^4 + a b_2 d_2 y^5 and Q(y) = (1 + by + b_2 y^2)(1 + dy + d_2 y^2).
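As a quick sanity check of the formula for λ_j'(ξ) above, the following Python sketch compares a central finite difference of λ_j against R(ξ^2)/√(Q^3(ξ^2)); here R is evaluated through the quotient-rule expression (1 − 3ay)Q(y) − y(1 − ay)Q'(y) rather than the expanded polynomial printed above, and the coefficient values a, b, d, b_2, d_2 are arbitrary sample numbers, not taken from the paper.

    import math

    # Hypothetical sample coefficients (illustration only; not values from the paper).
    a, b, d, b2, d2 = 0.3, 0.5, 0.7, 0.2, 0.4

    def Q(y):      # Q(y) = (1 + b y + b2 y^2)(1 + d y + d2 y^2)
        return (1.0 + b*y + b2*y*y) * (1.0 + d*y + d2*y*y)

    def Qp(y):     # dQ/dy by the product rule
        return (b + 2*b2*y)*(1.0 + d*y + d2*y*y) + (1.0 + b*y + b2*y*y)*(d + 2*d2*y)

    def lam(xi):   # lambda_j(xi) with the sign (-1)^{j-1} = +1 (j = 1)
        return xi*(1.0 - a*xi*xi) / math.sqrt(Q(xi*xi))

    def R(y):      # numerator obtained from the quotient rule
        return (1.0 - 3*a*y)*Q(y) - y*(1.0 - a*y)*Qp(y)

    h = 1e-6
    for xi in [0.1, 0.5, 1.0, 2.0]:
        fd = (lam(xi + h) - lam(xi - h)) / (2*h)     # central finite difference of lambda_j
        closed = R(xi*xi) / Q(xi*xi)**1.5            # R(xi^2)/sqrt(Q^3(xi^2))
        print(xi, fd, closed, abs(fd - closed))

The two columns agree to roughly the accuracy of the finite difference, which is consistent with the displayed derivative formula.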


Then, with ξsj ∈ Es chosen as in the proposition 3.3, we get 0

0

0

0

0

|ψj (ξ)| = |ψj (ξsj ) − ψj (ξ)| = |λj (ξsj ) − λj (ξ)|

(3.22) where 0

0

λj (ξsj ) − λj (ξ)

=

2 R(ξsj ) R(ξ 2 ) −p } (−1)j−1 { q 2 ) Q3 (ξ 2 ) Q3 (ξsj

=

(−1)j−1 q { 2 )Q3 (ξ 2 ) Q3 (ξsj 2 2 2 2 R(ξsj )(Q(ξ 2 ) − Q(ξsj ))(Q2 (ξ 2 ) + Q(ξ 2 )Q(ξsj ) + Q2 (ξsj )) q p 2 3 3 2 Q (ξsj ) + Q (ξ )

(3.23)

+

q  2 ) R(ξ 2 ) − R(ξ 2 ) }. Q3 (ξsj sj

The following relations are also useful for the proof of the inequalities (3.20), (3.21): (3.24)

|ξ1 | + |ξ2 | = |ξ1 + ξ2 | if sgn(ξ1 ) = sgn(ξ2 )

(3.25)

|ξ1 | + |ξ2 | = |ξ1 − ξ2 | if sgn(ξ1 ) 6= sgn(ξ2 )

The proof of (3.24) and (3.25) is as follows: if sgn(ξ_1) = sgn(ξ_2), then |ξ_1| + |ξ_2| = ||ξ_1| + |ξ_2|| = |sgn(ξ_1)ξ_1 + sgn(ξ_2)ξ_2| = |ξ_1 + ξ_2|; if sgn(ξ_1) ≠ sgn(ξ_2), that is, if sgn(ξ_1) = −sgn(ξ_2), then |ξ_1| + |ξ_2| = ||ξ_1| + |ξ_2|| = |sgn(ξ_1)ξ_1 + sgn(ξ_2)ξ_2| = |ξ_1 − ξ_2|.
We can now carry out the proof of the inequalities (3.20) and (3.21) for the different cases C1, C2, C3, C4.
Proof of (3.20) for the case C1-1. In the case C1-1 we have, thanks to the definitions of R and Q above,
R(ξ_{sj}^2) − R(ξ^2) = (ξ^2 − ξ_{sj}^2)[3a + (bd + d_2 + 2ab + 2ad)(ξ^2 + ξ_{sj}^2) + (2b d_2 + a d_2 + abd)(ξ^4 + ξ^2 ξ_{sj}^2 + ξ_{sj}^4)]
and
Q(ξ^2) − Q(ξ_{sj}^2) = (ξ^2 − ξ_{sj}^2)[b + d + (bd + d_2)(ξ^2 + ξ_{sj}^2) + b d_2 (ξ^4 + ξ^2 ξ_{sj}^2 + ξ_{sj}^4)],

so that thanks to (3.23), 0

0

λj (ξsj ) − λj (ξ)

=

2 (−1)j−1 (ξ 2 − ξsj ) q { 2 )Q3 (ξ 2 ) Q3 (ξsj 2 2 2 R(ξsj )(Q2 (ξ 2 ) + Q(ξ 2 )Q(ξsj ) + Q2 (ξsj )) 2 q [b + d(bd + d2 )(ξ 2 + ξsj ) p 2 3 3 2 Q (ξsj ) + Q (ξ ) 2 4 + bd2 (ξ 24 + ξ 2 ξsj + ξsj )] +

(3.26)

q 2 )[3a + (bd + d + 2ab + 2ad)(ξ 2 + ξ 2 ) Q3 (ξsj 2 sj

2 4 + (2bd2 + ad2 + abd)(ξ 24 + ξ 2 ξsj + ξsj )]}.


Then, since by Proposition 3.3 R(ξ_{sj}^2) ≥ 0, and since Q(ξ^2) ≥ 0 for all ξ ∈ R, we find, with (3.22), (3.26), and the fact that a, b, d, b_2, d_2 ≥ 0,

|ψj (ξ)| (3.27)





2 |ξ 2 − ξsj | p {3a + (bd + d2 + 2ab + 2ad)ξ 2 + (2bd2 + ad2 + abd)ξ 4 } Q3 (ξ 2 ) 2 c|ξ 2 − ξsj | 3

1

(1 + bξ 2 ) 2 (1 + dξ 2 + d2 ξ 4 ) 2

.

On the other hand, we find here with obvious computations, c|ξ|

00

|ψj (ξ)| ≤

(3.28)

(1 +

3 bξ 2 ) 2 (1

1

+ dξ 2 + d2 ξ 4 ) 2

.

Then with (3.27), (3.28), we find 00

1

3

|ψj (ξ)| c|ξ|(1 + bξ 2 ) 2 (1 + dξ 2 + d2 ξ 4 ) 2 . ≤ 0 2 |2 |ξ 2 − ξsj |ψj (ξ)|2

(3.29)

1

Hence if |ξ| ≥ 1 we have on Ω ∩ {N (ξsj , t− m )}c thanks to (3.29), 00

|ψj (ξ)| c|ξ|6 ≤ 0 |ξ − ξsj |2 |ξ + ξsj |2 |ψj (ξ)|2

(3.30)

1

so that when sgn(ξ) = sgn(ξsj ) we get on Ω ∩ {N (ξsj , t− m )}c , thanks to (3.30) and (3.24), 00

|ψj (ξ)| 0 |ψj (ξ)|2



c|ξ|6 |ξ − ξsj ≤

(3.31)

|2 (|ξ|

+ |ξsj |)2

2 4 14 c|ξ|6 c|ξ|4 = ≤ ct 5m + m ≤ ct 5m 2 2 2 |ξ − ξsj | |ξ| |ξ − ξsj |

and likewise when sgn(ξ) 6= sgn(ξsj ) we find with (3.30) and (3.25), 00

(3.32)

|ψj (ξ)| 4 2 14 c|ξ|6 c|ξ|4 ≤ = ≤ ct 5m + m ≤ ct 5m . 0 |ξ + ξsj |2 |ξ|2 |ξ + ξsj |2 |ψj (ξ)|2 1

Hence (3.31) and (3.32) give for |ξ| ≥ 1 and on Ω ∩ {N (ξsj , t− m )}c , 00

(3.33)

|ψj (ξ)| 14 = ct 5m . 0 |ψj (ξ)|2

On the other hand, if |ξ| ≤ 1 then thanks to (3.29) and proceeding as in (3.31) and 1 (3.32) we find on Ω ∩ {N (ξsj , t− m )}c , 00

(3.34)

|ψj (ξ)| 3 ≤ ct m . 0 |ψj (ξ)|2

This finishes the proof of the inequality (3.20) for the case (C1-1). The proofs of (3.20) for the cases (C2-1) and (C4-1), and of (3.21) for the cases (C1-2), (C2-2), C3 and (C4-2), follow the same lines as the proof of (3.20) for the case C1-1, using the R(y) and Q(y) that correspond to these cases. This finishes the proof of the inequalities (3.20) and (3.21). □


Now, with the inequality (3.20) in hand and thanks to (3.19), we find for the cases (C1-1), (C2-1), and (C4-1),
(3.35)   |I_2| ≤ c t^{-1} (∫_{{|ξ|≤1}} t^{3/m} dξ + ∫_{{1≤|ξ|≤t^{1/(5m)}}} t^{14/(5m)} dξ) ≤ c t^{-(m-3)/m}.

Likewise, for the cases (C1-2), (C2-2), C3 and (C4-2), we find in the same way, thanks to the inequalities (3.19) and (3.21),
(3.36)   |I_2| ≤ c t^{-(m-7)/m}.

Therefore, taking m = 4, we find for the cases (C1-1), (C2-1), (C4-1), thanks to (3.35), (3.18), (3.17), and (3.16), for all t ≥ 1,
(3.37)   |S_j(t)ϕ_j(x)| ≤ c t^{-1/4} (|ϕ_j|_1 + ||ϕ_j||_6) ≤ c(1 + t)^{-1/4} (|ϕ_j|_1 + ||ϕ_j||_6),
and taking m = 8, we find for the cases (C1-2), (C2-2), C3 or (C4-2), thanks to (3.36), (3.18), (3.17), and (3.16),
(3.38)   |S_j(t)ϕ_j(x)| ≤ c(1 + t)^{-1/8} (|ϕ_j|_1 + ||ϕ_j||_6) for all t ≥ 1.

(3.37) and (3.38), together with (3.15), give a part of the inequalities (3.2) and (3.3) of Lemma 3.2. In order to finish the proof of Lemma 3.2, we need the following lemmas, which are also useful for the sequel.
Lemma 3.4. Let h be such that (1 − ∂^2/∂ξ^2) ĥ(ξ) ∈ L^2(R). Then h(x) ∈ L^1(R) ∩ L^2(R).

Lemma 3.5. For each i, j = 1, 2, let p_{ij} be defined as in Section 2. Then, for a, b, d, b_2, d_2 satisfying C1, C2, C3 or C4, we have p_{ij} ∈ L^1(R) ∩ L^2(R).
Proof of Lemma 3.4 and Lemma 3.5. Thanks to the Plancherel theorem we have
(3.39)   ∫_R (1 + x^2)^2 |h(x)|^2 dx = ∫_R |F((1 + x^2)h(x))(ξ)|^2 dξ = ∫_R |(1 − ∂^2/∂ξ^2) ĥ(ξ)|^2 dξ ≤ c < ∞.
This shows that h ∈ L^2(R). Moreover, with (3.39) and thanks to the Schwarz inequality, we get
(3.40)   ∫_R |h(x)| dx ≤ (∫_R (1 + x^2)^{-2} dx)^{1/2} (∫_R (1 + x^2)^2 |h(x)|^2 dx)^{1/2} ≤ c < ∞,
that is, h ∈ L^1(R), and Lemma 3.4 is proven. Now, thanks to Lemma 3.4 and the definition of p_{ij} in Section 2, the proof of Lemma 3.5 follows immediately. □
We are now able to finish the proof of Lemma 3.2. We begin by ending the proof of the inequalities (3.2) and (3.3). Since, from Lemma 3.5, p_{ij} ∈ L^1(R), i, j = 1, 2, and thanks to the estimates of S_j(t), j = 1, 2, in (3.2) and the Young inequality, we find for the cases C1-1, C2-1, C4-1,
(3.41)   |p_{ij} ∗ S_j(t)ϕ_j(x)|_∞ ≤ |p_{ij}|_1 |S_j(t)ϕ_j(x)|_∞ ≤ c(1 + t)^{-1/4} (|ϕ_j|_1 + ||ϕ_j||_6) for all t ≥ 0,


which finishes the proof of the inequality (3.2). We finish the proof of the inequality (3.3) in the same manner. To prove the other inequalities (3.4), (3.5), (3.6), (3.7) of Lemma 3.2, we use the definitions of p_{ij}, h_{ij}, l_{ij}, i, j = 1, 2, in Section 2, and we proceed as above and as in the proof of Lemma 2.2. This ends the proof of Lemma 3.2. □
Proof of Theorem 3.1. Writing the solution (η, u) of (LP) in its integral form as follows,
(3.42)   η(x, t)

= p11 ∗ [S1 (t)(l11 ∗ η0 + l12 ∗ u0 ) + S2 (t)(l11 ∗ η0 + l12 ∗ u0 )]

(3.43) u(x, t)

= p21 ∗ [S1 (t)(l11 ∗ η0 + l12 ∗ u0 ) − S2 (t)(l11 ∗ η0 + l12 ∗ u0 )].

the proof of Theorem 3.1 follows immediately from the inequalities of Lemma 3.2. □
4. Global existence and decay for the solution of the NL system.
Let us now state the results on global existence and decay properties of the solution of the nonlinear system (1.1).
Theorem 4.1. Let α > 5 and let η_0(x), J^2 η_0(x), M_j η_0(x), u_0(x), J^2 u_0(x), M_j u_0(x) ∈ H_7(R) ∩ L^1(R), where M_1 = 1 − b ∂^2/∂x^2 + b_2 ∂^4/∂x^4 and M_2 = 1 − d ∂^2/∂x^2 + d_2 ∂^4/∂x^4. Suppose that, for each j = 1, 2, |M_j η_0(x)|_1 + |M_j u_0(x)|_1 + ||M_j η_0(x)||_6 + ||M_j u_0(x)||_6 < δ, with δ sufficiently small. Then, if α > 5, the solution (η, u) of the Cauchy problem associated to (1.1)-(1.2) with a, b, d, b_2, d_2 satisfying the conditions (C1-1), (C2-1) or (C4-1) verifies
(4.1)   |η(x, t)|_∞ + |u(x, t)|_∞ ≤ c(1 + t)^{-1/4}, for all t ≥ 0,
(4.2)   ||η(x, t)||_8 + ||u(x, t)||_8 ≤ c.

If instead α > 9, then the solution (η, u) of the Cauchy problem associated to (1.1)-(1.2) with a, b, d, b_2, d_2 satisfying the conditions (C1-2), (C2-2), (C3) or (C4-2) verifies
(4.3)   |η(x, t)|_∞ + |u(x, t)|_∞ ≤ c(1 + t)^{-1/8}, for all t ≥ 0,
(4.4)   ||η(x, t)||_8 + ||u(x, t)||_8 ≤ c.

Proof. In addition to the inequality (2.4) and those of Lemma 2.2, for the proof of Theorem 4.1 we need the following inequalities. From the definition of f_1, f_2 given in (1.1)-(1.2), we find, thanks to the Hölder inequalities,
(4.5)

α−1 2 |J 2 (f1 (η, u))|1 ≤ c(|η|∞ kuk2 kηk2 + |u|∞ ||η|α−2 ∞ kηk2 )

and (4.6)

2 α−1 2 |J 2 (f2 (η, u))|1 ≤ c|u|α−1 ∞ kuk2 + c|η|∞ kηk2 .

Now define the quantity
Q(t) = sup_{0≤τ≤t} {(1 + τ)^{1/4} (|η(τ)|_∞ + |u(τ)|_∞) + ||η||_8 + ||u||_8}.
We will consider here the integral form of the nonlinear solution of (1.1)-(1.2) as given by (2.2)-(2.3) in Section 2 above. Therefore, taking the L^∞ norm of the


solution (η, u) written in its integral form, and thanks to the inequalities of Lemma 3.2 in Section 3 and the inequalities (4.5), (4.6), (2.4), we find, for each j = 1, 2, if a, b, d, b_2, d_2 satisfy (C1-1), (C2-1), or (C4-1),
(4.7)

|η(x, t)|∞ + |u(x, t)|∞

1

≤ c(1 + t)− 4 (|Mj η0 (x)|1 + kMj η0 (x)k6 + |Mj u0 (x)|1 + kMj u0 (x)k6 ) α+1

Z

t

1

(1 + (t − τ ))− 4 (1 + τ )−

+ cQ(t)

α−1 4

dτ.

0

But since, for α > 5,
∫_0^t (1 + (t − τ))^{-1/4} (1 + τ)^{-(α−1)/4} dτ ≤ c(1 + t)^{-1/4},

we deduce from (4.7) that, for α > 5, when (1.1)-(1.2) is under the conditions (C1-1), (C2-1), or (C4-1),
(4.8)   (1 + τ)^{1/4} (|η(τ)|_∞ + |u(τ)|_∞) ≤ c(|M_j η_0(x)|_1 + ||M_j η_0(x)||_6 + |M_j u_0(x)|_1 + ||M_j u_0(x)||_6 + Q(t)^{α+1}).
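As an aside, the integral bound used in the step above is easy to check numerically. The following Python sketch (an illustration only, with α = 6 as a sample exponent) evaluates ∫_0^t (1 + (t − τ))^{-1/4} (1 + τ)^{-(α−1)/4} dτ by a midpoint rule and reports its product with (1 + t)^{1/4}, which stays bounded as t grows, consistent with the estimate.

    import numpy as np

    alpha = 6.0          # any alpha > 5; sample value, not prescribed by the paper

    def kernel_integral(t, n=4000):
        tau = (np.arange(n) + 0.5) * (t / n)     # midpoints of [0, t]
        integrand = (1.0 + (t - tau))**(-0.25) * (1.0 + tau)**(-(alpha - 1.0)/4.0)
        return integrand.sum() * (t / n)

    ts = [1.0, 10.0, 100.0, 1000.0, 10000.0]
    ratios = [(1.0 + t)**0.25 * kernel_integral(t) for t in ts]
    print(ratios)        # roughly constant in t, i.e. the integral decays like (1+t)^{-1/4}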

Furthermore, when (1.1)-(1.2) is under the conditions (C1-1), (C2-1), (C4-1), we get for α > 5 thanks to (2.4) and the inequalities of the lemma in section 2 above, (4.9)

kη(x, t)k8 + ku(x, t)k8

≤ c{kη0 (x)k9 + ku0 (x)k9 α+1

Z

t

(1 + τ )−

+Q(t)

α−1 4

dτ }

0 α+1

≤ c{kη0 (x)k9 + ku0 (x)k9 + Q(t)

}.

Then, thanks to (4.8), (4.9) and the definition of Q(t) above, we are led, for α > 5, to the inequality
(4.10)   Q(t) ≤ c{|M_j η_0|_1 + |M_j u_0|_1 + ||M_j η_0||_6 + ||M_j u_0||_6 + ||η_0||_9 + ||u_0||_9 + Q(t)^{α+1}}.

Henceforth, thanks to the inequality (4.10), we find that, if (4.11)

|Mj η0 |1 + |Mj u0 |1 + kMj η0 k6 + kMj u0 k6 + kη0 k9 + ku0 k9 < δ

with δ small enough, then Q(t) is bounded. Indeed, the inequality (4.10) is satisfied if Q(t) ∈ [0, β1 ] ∪ [β2 , ∞[ with 0 < β1 < β2 < ∞ (since δ is small). Thereby, since by the definition of Q(t), Q(0) = |η0 |∞ + |u0 |∞ + kη0 k8 + ku0 k8 , we have thanks to the Sobolev embedding, H8 (R) ⊂ L∞ (R) and with (4.11), (4.12)

Q(0) ≤ c(kη0 k8 + ku0 k8 ) ≤ cδ.

Then, the continuity of Q(t) and the inequality (4.12) allow us to conclude that Q(t) remains bounded for δ small enough and for all t ≥ 0; otherwise, (4.12) would be contradicted. Thus we have obtained for α > 5, a bound of Q(t) and consequently, a bound and a decay estimate of the local solution which permit us to extend globally,


for α > 5 and for small initial values, the local solution of the system (1.1)-(1.2) under the conditions (C1-1), (C2-1) or (C4-1). On the other hand, following the same lines as above, we find for α > 9 the same results for the solution of (1.1)-(1.2) under the conditions (C1-2), (C2-2), (C3) or (C4-2). This finishes the proof of Theorem 4.1. □
5. Blow up in finite time for the solution of the NL system.
We consider here the system (1.1)-(1.2) under the condition C5-1 with f_1 = 0 and a = −b = −d_2 < 0. That is,

(5.1)   η_t − b η_{xxt} = −u_x + b u_{xxx}, x, t ∈ R,
(5.2)   u_t − d u_{xxt} + b u_{xxxxt} = −η_x + b η_{xxx} − (f_2(η, u))_x + b (f_2(η, u))_{xxx},

with η(x, 0) = η_0(x), u(x, 0) = u_0(x), η(∞, t) = η_x(∞, t) = η_{xx}(∞, t) = u(∞, t) = u_x(∞, t) = u_{xx}(∞, t) = 0, and where f_2(η, u) = u^{α+1}, or f_2(η, u) = η^{α+1}, or f_2(η, u) = uη + (1/2)η^2, α ≥ 1.
Then, setting P_1 = 1 − b ∂^2/∂x^2 and P_2 = 1 − d ∂^2/∂x^2 + b ∂^4/∂x^4, (5.1)-(5.2) becomes
(5.3)   η_t = −u_x, x, t ∈ R,
(5.4)   u_t = −P_2^{-1} P_1 (η + f_2(η, u))_x.

We already know, from Theorem 2.1 above, the local existence of the solution of (5.3)-(5.4). We will now prove a blow-up theorem for the solution to (5.3)-(5.4). Note that all the theorems in the sequel also work, thanks to the symmetry, for the solutions of (1.1)-(1.2) under the conditions (C5-2). Before giving the blow-up theorems, let us prove the following needed lemmas.
Lemma 5.1. If there exist functions u_0(x) ∈ H^{s+1} and w_0(x) ∈ H^{s+2}, s > 1/2, such that the initial values η(x, 0), η_t(x, 0), u(x, 0) satisfy the relations
η(x, 0) = (w_0(x))_x,   η_t(x, 0) = −(u_0(x))_x,

then for all t ∈ [0, T], the solution (η, u) of the Cauchy problem associated to (5.3)-(5.4) satisfies η(x, t) = (w(x, t))_x, with a corresponding evolution of w(x, t), η(x, t) satisfying the system
(5.5)   w_t(x, t) = −u(x, t), x, t ∈ R,
(5.6)   u_t = −P_2^{-1} P_1 (η + f_2(η, u))_x.

The couple of functions (w, u) belongs to C^1([0, T]; H^{s+2}) × C^1([0, T]; H^{s+1}).
Proof. From (5.3) we find
(5.7)   η(x, t) = η(x, 0) − ∫_0^t (u(x, s))_x ds.
The term η(x, 0) is an x-derivative by hypothesis and ∫_0^t u_x ds is an x-derivative. Therefore, there exists a w(x, t) such that η(x, t) = w_x. The second part of the lemma is proved by Theorem 2.1 above. □
We also need, for the blow-up result, the following lemma proved in (Levine, 1974).


Lemma 5.2. Suppose ψ(t), sastifying the inequality


t ≥ 0, is a positive, twice-differentiable fonction ψ ψ 00 ψ − (1 + γ)(ψ 0 )2 ≥ 0

where γ > 0. If ψ(0) > 0 and ψ 0 (0) > 0, then ψ(t) −→ ∞ as t −→ t1 ≤

ψ(0) ; (t1 is a positive γψ 0 (0)

constant). Let us give now the blow-up theorem. Theorem 5.3. (Blow-up for the solution of (5.3)-(5.4)). Suppose that there exists γ > 0 such that η(η + f2 (η, u)) ≤ 2(1 + 2γ)F2 (η, u)

(5.8) where F2 is such that

k X ∂F2 λj uj = η + λ0 f2 (u, η) + ∂η j=1

∂F2 = ν0 (η + f2 (η, u)) ∂u with ν0 ≥ 0, λj ≥ 0, ∀j, k ≥ 0 and where λ0 = 1 if f2 is not a purely function of u. Suppose also that the initial values η(x, 0), ηt (x, 0), u(x, 0), are chosen such that they satisfy • η(x, 0) = (w0 (x))x , ηt (x, 0) = −(u0 (x))x , for some u0 (x) ∈ Hs+1 , 1 Hs+2 , s > , 2 R 1 • E(0) = (u0 , P1−1 P2 u0 ) + R F2 (η0 , u0 )dx < 0. 2 Then, the solution (η, u) of (5.3)-(5.4) blows-up in finite time.

w0 (x) ∈

Proof. Proof of the theorem 5.3. For example: If f2 (η, u) = uα+1 then F2 (η, u) = 1 2 1 1 1 η + uη + uα+2 , or if f2 (η, u) = η α+1 then F2 (η, u) = η 2 + η α+2 , or 2 α+2 2 α+2 1 1 1 1 1 if f2 (η, u) = η 2 + uη then F2 (η, u) = η 2 + uη + ηu2 + η 2 u + η 3 . 2 2 2 2 6 To begin the proof, let 1 E(t) = (u, P1−1 P2 u) + 2

Z F2 (η, u)dx. R

We claim that (5.9)

1 E(t) = E(0) = (u0 , P1−1 P2 u0 ) + 2

Z F2 (η0 , u0 )dx < 0. R


Indeed, thanks to (5.3), (5.4) and integrations by parts, we have

0

(5.10) E (t)

Z

∂F2 ∂F2 (η, u) + ut (η, u)}dx ∂η ∂u R Z ∂F2 = −((η + f2 (η, u))x , u) − (η, u)dx ux ∂η R Z ∂F2 (η, u)dx − P2−1 P1 (η + f2 (η, u))x ∂u R Z k X = −((η + f2 (η, u))x , u) − ux (η + λ0 f2 (u, η) + λj uj )dx =

(ut , P1−1 P2 u)

+

{ηt

R

Z

j=1

P2−1 P1 (η + f2 (η, u))x (η + f2 (η, u))dx = 0.

−ν0 R

Now, define ψ(t) as follows.

ψ(t) = (w, P_1^{-1} P_2 w) + β_0 (t + t_0)^2,

where β_0 and t_0 are non-negative constants to be defined later. Suppose that the maximal time of existence is infinite. A contradiction will be obtained by Lemma 5.2. The condition necessary to apply Lemma 5.2 is that ψ''ψ − (1 + γ)(ψ')^2 ≥ 0. By the hypotheses of Theorem 5.3, and thanks to Lemma 5.1 and integration by parts, we have

ψ 0 (t) = 2(wt , P1−1 P2 w) + 2β0 (t + t0 ) = 2(−u, P1−1 P2 w) + 2β0 (t + t0 ),

and

ψ 00 (t)

=

2(u, P1−1 P2 u) + 2(P2−1 P1 (η + f2 (η, u))x , P1−1 P2 w) + 2β0

=

2(u, P1−1 P2 u) − 2(η + f2 (η, u), η) + 2β0 Z −1 2(u, P1 P2 u) − 2 [η(η + f2 (η, u)) − (2 + 4γ)F2 (η, u)]dx

=

R

−2(2 + 4γ)E(t) + (2 + 4γ)(u, P1−1 P2 u) + 2β0 ≥ 4(1 + γ)(u, P1−1 P2 u) − 2(2 + 4γ)E(t) + 2β0 .


It follows, since ψ(t) ≥ 0 and using the Schwarz inequality and the Parseval formula, that ψ''ψ − (1 + γ)(ψ')^2

≥ [4(1 + γ)(u, P1−1 P2 u) − 2(2 + 4γ)E(t) + 2β0 ]ψ(t) −4(1 + γ)[(−u, P1−1 P2 w) + β0 (t + t0 )]2 ≥ [4(1 + γ)(u, P1−1 P2 u) − 2(2 + 4γ)E(t) + 2β0 ]ψ(t) −4(1 + γ)[(u, P1−1 P2 u)(w, P1−1 P2 w) 1

1

+2(u, P1−1 P2 u) 2 (w, P1−1 P2 w) 2 β0 (t + t0 )] −4(1 + γ)β0 2 (t + t0 )2 ≥ [4(1 + γ)(u, P1−1 P2 u) − 2(2 + 4γ)E(t) + 2β0 ]ψ(t) −4(1 + γ)[(u, P1−1 P2 u)(w, P1−1 P2 w) 1

1

+2(u, P1−1 P2 u) 2 (w, P1−1 P2 w) 2 β0 (t + t0 )] −4(1 + γ)β0 ψ(t) + 4(1 + γ)β0 (w, P1−1 P2 w) ≥

−2(1 + 2γ)[2E(t) + β0 ]ψ(t) + 4(1 + γ)(u, P1−1 P2 u)β0 (t + t0 )2 1

1

−8(1 + γ)(u, P1−1 P2 u) 2 (w, P1−1 P2 w) 2 β0 (t + t0 ) +4(1 + γ)β0 (w, P1−1 P2 w) ≥ −2(1 + 2γ)[2E(t) + β0 ]ψ(t) +4(1 + γ)(u, P1−1 P2 u)β0 (t + t0 )2 − 4(1 + γ)(u, P1−1 P2 u)β0 (t + t0 )2 −4(1 + γ)β0 (w, P1−1 P2 w) + 4(1 + γ)β0 (w, P1−1 P2 w) ≥ −2(1 + 2γ)[2E(t) + β0 ]ψ(t). Thereby, since E(t) = E(0) < 0, it follows by taking β0 = −2E(0), that ψ 00 ψ − (1 + γ)(ψ 0 )2 ≥ 0. Also ψ 0 (0) = 2(−u0 , P1−1 P2 w0 ) + 2β0 t0 > 0 if t0 is sufficently large. Thus, by lemma 5.2, ψ(t) becomes infinite at a time T1 at most equal to ψ(0) < ∞. Therefore, we have a contradiction with the fact that the tβ0 = γψ 0 (0) maximal time of existence is infinite. Henceforth, there is blow up in finite time, and the maximal time of existence is finite. 

References
1. M. Chen and O. Goubet (2007), Long-time asymptotic behavior of dissipative Boussinesq systems, Discrete and Continuous Dynamical Systems - Series A, 7, Num. 3.
2. J. L. Bona, M. Chen and J.-C. Saut (2004), Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media: II. The nonlinear theory, Nonlinearity 17, 925-952.
3. H. Levine (1974), Instability and non-existence of global solutions to non-linear wave equations of the form P u_tt = −Au + F(u), Trans. Amer. Math. Soc., 92, 1-21.

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL. 8, NO. 1, 41-91, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Caputo Fractional Multivariate Opial type inequalities on spherical shells
George A. Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, U.S.A.
[email protected]
Abstract. Here is introduced the concept of the Caputo fractional radial derivative for a function defined on a spherical shell. Using polar coordinates we are able to derive multivariate Opial type inequalities over a spherical shell of R^N, N ≥ 2, by studying the topic in all possibilities. Our results involve one, two or more functions. We produce many univariate Caputo fractional Opial type inequalities, several of which are used to establish results on the shell. We give an application to prove uniqueness of the solution of a general partial differential equation on the shell. We also apply our results to Riemann-Liouville fractional derivatives.
2000 Mathematics Subject Classification: 26A33, 26D10, 26D15.
Key Words and Phrases: Opial inequality, fractional inequality, Caputo fractional derivative, radial derivative, univariate and multivariate inequality.

1  Introduction

This work is inspired by the articles of Opial [22], Bessack [13], Anastassiou-Koliha-Pecaric [11], [12], and Anastassiou [9], [10]. We would like to mention
Theorem A. (Opial [22], 1960) Let c > 0 and let y(x) be real, continuously differentiable on [0, c], with y(0) = y(c) = 0. Then
∫_0^c |y(x) y'(x)| dx ≤ (c/4) ∫_0^c (y'(x))^2 dx.
Equality holds for the function y(x) = x on [0, c/2]


and y(x) = c − x on [c/2, c].
The next result implies Theorem A and is very useful in applications. It is also our main motivation.
Theorem B. (Bessack [13], 1962) Let b > 0. If y(x) is real, continuously differentiable on [0, b], and y(0) = 0, then
∫_0^b |y(x) y'(x)| dx ≤ (b/2) ∫_0^b (y'(x))^2 dx.

Equality holds only for y = mx, where m is a constant.
Opial type inequalities usually find applications in establishing uniqueness of solutions of initial value problems for differential equations and their systems, see Willett [27]. In this article we present a series of various Caputo fractional multivariate Opial type inequalities over spherical shells. To achieve our goal we use polar coordinates, and we introduce and use the Caputo fractional radial derivative. We work on the spherical shell, and not on the ball, because a radial derivative cannot be defined at zero. So we reduce the problem to a univariate one. Therefore we derive and use a large array of univariate Opial type inequalities involving Caputo fractional derivatives; these are Caputo fractional derivatives defined at an arbitrary anchor point a ∈ R. In our results we involve one, two, or several functions. But first we need to develop an extensive background in two parts; then follow the main results. At the end we give an application proving uniqueness of the solution for a general PDE initial value problem. We also re-establish our results using Riemann-Liouville fractional derivatives defined at an arbitrary anchor point.
In this article, to build our background regarding the Caputo fractional derivative, we use the excellent monograph [17]. The Caputo derivative was introduced in 1967, see [14]; also see [15], [16]. It happens that the Riemann-Liouville fractional derivative has some disadvantages when it is used to model real-world phenomena with fractional differential equations. One reason is that the initial conditions there involve fractional derivatives that are difficult to connect with actual data, etc. However, Caputo fractional derivative modelling involves initial conditions that are described by ordinary derivatives, which are much easier to obtain from real-world data. So, more and more in recent years, the Caputo version is usually preferred when physical models are described, because the physical interpretation of the prescribed data is clear, and it is therefore in general easier to gather these data, e.g. by appropriate measurements. Also, from the pure mathematics side, there are reasons to prefer the Caputo fractional derivative more and more.
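As a quick numerical illustration of Theorem B (and hence of Theorem A), the following Python sketch evaluates both sides of the inequality for the sample choice y(x) = sin(x) on [0, 1], which satisfies y(0) = 0; the choice of y and the discretization are illustrative only.

    import numpy as np

    b = 1.0
    x = np.linspace(0.0, b, 20001)
    y = np.sin(x)                 # y(0) = 0, continuously differentiable
    dy = np.cos(x)                # y'(x)

    dx = x[1] - x[0]
    lhs = np.sum(np.abs(y * dy)) * dx        # int |y y'| dx (simple Riemann sum)
    rhs = (b / 2.0) * np.sum(dy**2) * dx     # (b/2) int (y')^2 dx

    print(lhs, rhs, lhs <= rhs)   # Theorem B predicts lhs <= rhs

For this sample, the left side is about 0.354 and the right side about 0.364; equality would require y proportional to x, as stated above.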

2  Background-I

Here we follow [17]. We start with
Definition 1. Let ν ≥ 0. The operator J_a^ν, defined on L_1[a, b] by
(1)   J_a^ν f(x) := (1/Γ(ν)) ∫_a^x (x − t)^{ν−1} f(t) dt
for a ≤ x ≤ b, is called the Riemann-Liouville fractional integral operator of order ν. For ν = 0, we set J_a^0 := I, the identity operator. Here Γ stands for the gamma function.
Theorem 2. ([17]) Let f ∈ L_1[a, b], ν > 0. Then the integral J_a^ν f(x) exists for almost every x ∈ [a, b]. Moreover, J_a^ν f ∈ L_1([a, b]).
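For orientation, a small numerical sketch of the operator in (1): for f(t) = (t − a)^p one has the classical identity J_a^ν (t − a)^p = Γ(p + 1)/Γ(p + ν + 1) (x − a)^{p+ν}, which the following Python code (illustrative parameter values only) reproduces by direct quadrature; ν ≥ 1 is chosen so that the kernel (x − t)^{ν−1} is not singular.

    import math
    import numpy as np

    a, x = 0.0, 2.0
    nu, p = 1.5, 2.0               # sample orders; nu >= 1 keeps (x-t)^(nu-1) bounded

    t = np.linspace(a, x, 200001)
    integrand = (x - t)**(nu - 1.0) * (t - a)**p
    J_numeric = integrand.sum() * (t[1] - t[0]) / math.gamma(nu)   # crude Riemann sum for (1)

    J_exact = math.gamma(p + 1.0) / math.gamma(p + nu + 1.0) * (x - a)**(p + nu)
    print(J_numeric, J_exact)      # the two values agree to a few decimal places

We need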

D :=

whenever f (n) ∈ L1 ([a, b]) . Also we need Theorem 6. (p.37, [17]) Let ν ≥ 0, n := dνe . Moreover assume that f ∈ AC n ([a, b]) (space of functions with absolutely continuous (n − 1) −st derivative). Then ˆ aν f = Daν (f − Tn−1 (f ; a)) , D where Tn−1 (f ; a) (x) :=

n−1 X k=0

a.e. on [a, b] ,

f (k) (a) k (x − a) , k! 3

x ∈ [a, b] ,

(5)

(6)



is the Taylor polynomial of degree n − 1 of f, centered at a. Next we give the definition of Caputo fractional derivative ([17]). Definition 7. (p.38, [17]) Assume that f is such that Daν (f − Tn−1 (f ; a)) (x) exists for some x ∈ [a, b] . Then we define the Caputo fractional derivative by ν D∗a f (x) := Daν (f − Tn−1 (f ; a)) (x) .

(7)

So the above definition applies to all points x ∈ [a, b] : Daν (f − Tn−1 (f ; a)) (x) ∈ R. We have Corollary 8. Let ν ≥ 0, n := dνe , f ∈ AC n ([a, b]) . Then the Caputo fractional derivative Z x 1 n−ν−1 (n) ν (x − t) f (t) dt (8) D∗a f (x) = Γ (n − ν) a exists almost everywhere for x in [a, b] . We have ν f exists iff Corollary 9. Let ν ≥ 0, n := dνe , f ∈ AC n ([a, b]) . Then, D∗a ν Da f exists. Proof. By linearity of Daν operator and assumption.  We need Lemma 10. ([17]) Let ν ≥ 0, n = dνe . Assume that f is such that both ν f and Daν f exist. D∗a Then, ν D∗a f (x) = Daν f (x) −

n−1 X k=0

f (k) (a) k−ν (x − a) . Γ (k − ν + 1)

(9)

Lemma 11. ([17]) All as in Lemma 10. Additionally assume that f (k) (a) = 0 for k = 0, 1, . . . , n − 1. Then, ν D∗a f = Daν f.

(10)

In conclusion ν Corollary 12. Let ν ≥ 0, n := dνe , f ∈ AC n ([a, b]) , D∗a f exists or Daν f (k) exists, and f (a) = 0, k = 0, 1, . . . , n − 1. Then ν Daν f = D∗a f.

(11)

We need the following Taylor-Caputo formula Theorem 13. (p.40, [17]) Let ν ≥ 0, n := dνe , f ∈ AC n ([a, b]) . Then f (x) =

n−1 X k=0

f (k) (a) k ν (x − a) + Jaν D∗a f (x) , k!

∀x ∈ [a, b] . 4

(12)



ν Clearly here Jaν D∗a f ∈ AC n ([a, b]) . Corollary 14. Let ν ≥ 0, n := dνe , f ∈ AC n ([a, b]) , and f (k) (a) = 0, k = 0, 1, . . . , n − 1. Then Z x 1 ν−1 ν ν (x − t) D∗a f (t) dt. (13) f (x) = Jaν D∗a f (x) = Γ (ν) a

We need Lemma 15. Let ν ≥ γ +1, γ ≥ 0. Call n := dνe , m := dγe . Then n−m ≥ 1, i.e. m ≤ n − 1. Proof. Clearly ν ≥ 1 and ν > γ, ν − γ ≥ 1. By γ + 1 > m we get ν > m, and n > m, that is ν − m > 0 and n − m > 0. We see that ν ≥ γ + 1 ≥ [γ] + 1, where [·] is the integral part. Thus ν ≥ ([γ] + 1) ∈ N and ν ≥ [ν] ≥ [γ] + 1. Therefore [ν] − [γ] ≥ 1, (14) to be used next. We distinguish the following cases. i) Let ν, γ ∈ / N, then dνe = [ν] + 1, dγe = [γ] + 1. By (14) we get ([ν] + 1) − ([γ] + 1) ≥ 1. Hence n − m ≥ 1. ii) Let ν, γ ∈ N, then [ν] = dνe = ν, [γ] = dγe = γ. So by (14) n − m ≥ 1. iii) Let ν ∈ / N, γ ∈ N. Then n = dνe = [ν] + 1, m = dγe = [γ] = γ. Hence by (14) we have (dνe − 1) − m ≥ 1, and dνe − m ≥ 2 > 1. Hence n − m > 1. iv) Let ν ∈ N, γ ∈ / N. Then 1 + γ < dγe + 1 = dγ + 1e , and 1 + γ ≤ ν ∈ N by assumption. Therefore dγe + 1 ≤ ν, and ν − dγe ≥ 1. So that again n − m ≥ 1. Claim is proved in all cases.  We present the representation theorem. Theorem 16. Let ν ≥ γ + 1, γ ≥ 0. Call n := dνe , m := dγe . Assume f ∈ ν AC n ([a, b]) , such that f (k) (a) = 0, k = 0, 1, . . . , n − 1, and D∗a f ∈ L∞ (a, b) . Then γ γ D∗a f ∈ C ([a, b]) , D∗a f (x) = Jam−γ f (m) (x) , (15) and γ D∗a f (x) =

1 Γ (ν − γ)

Z

x

ν−γ−1

(x − t)

ν D∗a f (t) dt,

(16)

a

∀x ∈ [a, b] . Proof. If γ = 0 then (16) collapses to (13) , also (15) is clear. So we assume γ > 0. By Lemma 15 we have m ≤ n − 1. By the assumption f ∈ AC n ([a, b]) we get that f ∈ C n−1 ([a, b]) and thus f ∈ C m ([a, b]) . γ By Lemma 3.7, p.41 of [17] we get that D∗a f = Jam−γ f (m) ∈ C ([a, b]) and γ / N. Clearly the last statement is true also when γ ∈ N, so D∗a f (a) = 0, for γ ∈ proving (15) and first claim. 5



To remind, we have ν − m > 0, and ν − 1 > 0 by γ > 0. Using Γ (p + 1) = pΓ (p) , p > 0, (13) and Theorem 7 of [9], we obtain ν f (m) (x) = Jaν−m D∗a f (x) ,

∀x ∈ [a, b] .

(17)

Therefore we get (17)

γ ν D∗a f (x) = Jam−γ f (m) (x) = Jam−γ Jaν−m D∗a f (x)

(by Theorem 2.2, p.14 of [17], and ν − γ ≥ 1) ν = Jaν−γ D∗a f (x) ,

∀x ∈ [a, b] .

That is proving (16) .  We also give the representation theorem. Theorem 17. Let ν ≥ γ + 1, γ ≥ 0. Call n := dνe , m := dγe . Let f ∈ AC n ([a, b]) , such that f (k) (a) = 0, k = 0, 1, . . . , n − 1. Assume there exists Daν f (x) ∈ R, ∀x ∈ [a, b] , and Daν f ∈ L∞ (a, b) . Then Daγ f ∈ C ([a, b]) , Daγ f (x) = Jam−γ f (m) (x) ,

(18)

∀x ∈ [a, b] , Daγ f (x) =

1 Γ (ν − γ)

Z

x

ν−γ−1

(x − t)

Daν f (t) dt,

(19)

a

∀x ∈ [a, b] . ν f (x) ∈ R, and that Proof. By Corollaries 9 and 12 we get existing D∗a ν ν (x) = Da f (x) , ∀x ∈ [a, b] . That is D∗a f ∈ L∞ (a, b) and by (16) we have Z x 1 ν−γ−1 γ D∗a (x − t) Daν f (t) dt, ∀x ∈ [a, b] . f (x) = (20) Γ (ν − γ) a

ν D∗a f

γ γ Since D∗a f ∈ C ([a, b]) we get D∗a f (x) ∈ R, ∀x ∈ [a, b] . And since f ∈ m C ([a, b]) then f ∈ AC ([a, b]) . By Corollary 9 Daγ f exists. Also f (k) (a) = 0, γ for k = 0, 1, . . . , m − 1. Thus by Corollary 12 we obtain Daγ f (x) = D∗a f (x) , ∀x ∈ [a, b] . Now by (20) we have established (19) .  m

3  Main Results
3.1  Results involving one function

We present Theorem 18. Let ν ≥ γ + 1, γ ≥ 0. Call n := dνe and assume f ∈ ν AC n ([a, b]) such that f (k) (a) = 0, k = 0, 1, . . . , n − 1, and D∗a f ∈ L∞ (a, b) . 1 1 Let p, q > 1 such that p + q = 1, a ≤ x ≤ b. 6


Then


x

Z

γ ν |D∗a f (ω)| |(D∗a f ) (ω)| dω ≤

a

(x − a)

pν−pγ−p+2 p

√  1/p q 2 Γ (ν − γ) ((pν − pγ − p + 1) (pν − pγ − p + 2)) 2/q Z x q ν |D∗a f (ω)| dω . ·

(21)

a

Proof. Similar to Theorem 25.2, p.545, [2], and Theorem 2.1 of [11].  A related extreme case comes next. Proposition 19. All as in Theorem 18, but with p = 1 and q = ∞, we find Z x γ ν |D∗a f (ω)| |D∗a f (ω)| dω ≤ a ν−γ+1  2 (x − a) ν kD∗a f k∞,(a,x) . Γ (ν − γ + 2)

(22)

Proof. Similar to Proposition 25.1, p.547, [2].  The converse of (21) follows. Theorem 20. Let ν ≥ γ + 1, γ ≥ 0. Call n := dνe and assume f ∈ ν f, Dν1 f ∈ AC n ([a, b]) such that f (k) (a) = 0, k = 0, 1, . . . , n − 1, and D∗a ∗a ν f is of fixed sign a.e. in [a, b] . Let p, q such that L∞ (a, b) . Suppose that D∗a 0 < p < 1, q < 0 and p1 + 1q = 1, a ≤ x ≤ b. Then x

Z

γ ν |D∗a f (ω)| |D∗a f (ω)| dω ≥

a

(x − a)

pν−pγ−p+2 p

√  1/p q 2 Γ (ν − γ) ((pν − pγ − p + 1) (pν − pγ − p + 2)) Z x 2/q q ν · |D∗a f (ω)| dω .

(23)

a

Proof. Similar to Theorem 25.3, p.547, [2], and Theorem 2.3 of [11].  We give Theorem 21. Let ν ≥ 2, k ≥ 0, ν ≥ k + 2. Call n := dνe and assume ν f ∈ L∞ (a, b) . f ∈ AC n ([a, b]) such that f (j) (a) = 0, j = 0, 1, . . . , n−1, and D∗a 1 1 Let p, q > 1 such that p + q = 1, a ≤ x ≤ b. Then Z

x

k k+1 D∗a f (ω) D∗a f (ω) dω ≤

a

7



(x − a)

2(pν−pk−p+1) p

2

2/p

2 (Γ (ν − k)) (pν − pk − p + 1) 2/q Z x q ν . |D∗a f (ω)| dω ·

(24)

a

Proof. Similar to Theorem 25.4, p.549, [2], and Theorem 2.4 of [11].  The extreme case follows. Proposition 22. Under the assumptions of Theorem 21 when p = 1, q = ∞ we find Z x k+1 k D∗a f (ω) D∗a f (ω) dω ≤ a 2(ν−k)

(x − a)



ν kD∗a f k∞,(a,x)

2 .

2

2 (Γ (ν − k + 1))

(25)

Proof. Similar to Proposition 25.2, p.551, [2].  We give the related converse result. Theorem 23. Let ν ≥ 2, k ≥ 0, ν ≥ k + 2. Call n := dνe . Assume ν f ∈ AC n ([a, b]) such that f (j) (a) = 0, j = 0, 1, . . . , n − 1, and D∗a f, Dν1 f ∈ ∗a ν f is of fixed sign a.e. in [a, b] . Let p, q such that L∞ (a, b) . Suppose that D∗a 0 < p < 1, q < 0 and p1 + 1q = 1, a ≤ x ≤ b. Then Z x k k+1 D∗a f (ω) D∗a f (ω) dω ≥ a

(x − a)

2(pν−pk−p+1) p

2

2/p

2 (Γ (ν − k)) (pν − pk − p + 1) Z x 2/q q ν · |D∗a f (ω)| dω .

(26)

a

Proof. Similar to Theorem 25.5, p.553 of [2].  Next we present Theorem 24. Let γi ≥ 0, ν ≥ 1, ν − γi ≥ 1; i = 1, . . . , l, n := dνe , and ν f ∈ AC n ([a, b]) such that f (k) (a) = 0, k = 0, 1, . . . , n−1, and D∗a f ∈ L∞ (a, b) . Here a ≤ x ≤ b; q1 (x) , q2 (x) continuous functions on [a, b] such that q1 (x) ≥ 0, Pl q2 (x) > 0 on [a, b] , and ri > 0 : i=1 ri = r. Let s1 , s01 > 1 : s11 + s10 = 1 and s2 , s02 > 1 : s12 + Denote by

1 s02

1

= 1, and p > s2 .

Z Q1 (x) :=

x

s01

(q1 (ω)) dω a

8

1/s01 (27)


and x

Z

−s02 /p

(q2 (ω))

Q2 (x) :=


r/s02 dω

,

(28)

a

σ :=

p − s2 . ps2

(29)

Then Z

l Y

x

q1 (ω) a

r

γi |D∗a f (ω)| i dω ≤ Q1 (x) Q2 (x)

i=1 l  Y

 σ ri σ r r σ (Γ (ν − γi )) i (ν − γi − 1 + σ) i i=1 Pl  (ν−γi −1)ri +σr+ s1 1 (x − a) i=1 ·   1/s1 Pl i=1 (ν − γi − 1) ri s1 + rs1 σ + 1 Z ·

x

p

ν q2 (ω) |D∗a f (ω)| dω

r/p .

(30)

a

Proof. Similar to Theorem 26.1, p.567, [2], and Theorem 2.1 of [12].  The counterpart of last theorem follows. Theorem 25. Let γi ≥ 0, ν ≥ 1, ν − γi ≥ 1; i = 1, . . . , l, n := dνe , and ν f, Dν1 f ∈ f ∈ AC n ([a, b]) such that f (k) (a) = 0, k = 0, 1, . . . , n − 1, and D∗a ∗a L∞ (a, b) . Here a ≤ x ≤ b; q1 (x) , q2 (x) > 0 continuous functions on [a, b] and Pl ri > 0 : i=1 ri = r. Let 0 < s1 , s2 < 1 and s01 , s02 < 0 such that s11 + s10 = 1, 1 s2

+

1 s02

ν f (t) is of fixed sign a.e. in [a, b] . Denote = 1. Assume that D∗a x

Z Q1 (x) :=

1

1/s01

s0

(q1 (ω)) 1 dω

,

(31)

a

Z

x

Q2 (x) :=

−s02

(q2 (ω))

r/s02 dω

.

(32)

a

Set λ :=

s1 s2 . s1 s2 − 1

(33)

Then Z

x

q1 (ω) a

l Y

! γi |D∗a f

i=1

9

ri

(ω)|

dω ≥



Q1 (x) Q2 (x)



( Ql

ri

(Γ (ν − γi )) ((ν − γi − 1) s22 s1 + 1)

i=1

ri s2 2 s1

)

Pl r (ν−γi −1)s1 +s−2 2 ))+1}/s1 (x − a){( i=1 i ( · n o1/s1  Pl −2 r (ν − γ − 1) s + s + 1 i i 1 2 i=1 x

Z

q2λs2

·

ν (ω) |D∗a f

λs2

(ω)|

r/λs2 .



(34)

a

Proof. Similar to Theorem 26.2, p.570, [2], and Theorem 2.3 of [12].  A related extreme case comes next for p = 1 and q = ∞. Theorem 26. Let ν ≥ γi + 1, γi ≥ 0; i = 1, . . . , l. Call n := dνe and assume ν f ∈ AC n ([a, b]) such that f (k) (a) = 0, k = 0, 1, . . . , n−1, and D∗a f ∈ L∞ (a, b) . Pl Here a ≤ x ≤ b, with 0 ≤ q˜ (ω) ∈ L∞ (a, b) and ri > 0 : i=1 ri = r. Then x

Z

q˜ (ω) a

l Y

r

γi f (ω)| i dω ≤ |D∗a

i=1

r    ν  k˜ f k∞,(a,x)  q k∞,(a,x) kD∗a . Ql ri   i=1 (Γ (ν − γi + 1))   P  (x − a)rν− li=1 ri γi +1    .  rν − Pl r γ + 1 

(35)

i=1 i i

Proof. Similar to Proposition 26.1 of [2], p.573 and Theorem 2.2 of [12].  We continue with the interesting Theorem 27. Let k ≥ 0, γ ≥ 1, ν ≥ 2, n := dνe , such that ν − γ ≥ 1, γ − k ≥ 1, and f ∈ AC n ([a, b]) such that f (j) (a) = 0, j = 0, 1, . . . , n − 1, and ν D∗a f ∈ L∞ (a, b) . Here a ≤ x ≤ b, p, q > 1 : p1 + 1q = 1. Then x

Z

k γ |D∗a f (ω)| D∗a f (ω) dω ≤

a 2ν−k−γ−1+ q2 ) 2−1/p (x − a)( 1/q

Γ (ν − k) Γ (ν − γ + 1) ((ν − γ) q + 1) 2/p Rx ν p |D∗a f (ω)| dω a · . 1/q (2νq − kq − γq − q + 2)

Proof. Similar to Theorem 26.3, p.574, [2], and Theorem 2.5 of [12]. 10

(36)





We give Theorem 28. Let ν ≥ γi + 1, γi ≥ 0, i = 1, . . . , k ∈ N − {1} , n := dνe . ν Assume f ∈ AC n ([a, b]) such that f (j) (a) = 0, j = 0, 1, . . . , n − 1, and D∗a f∈ Pk 1 1 L∞ (a, b) . Here a ≤ x ≤ b, γ := i=1 γi . Let p, q > 1 such that p + q = 1. ν Furthermore, suppose that |D∗a f (t)| is decreasing on [a, x] . Then Z

k xY

γi |D∗a f (ω)| dω ≤

a i=1 1+kνp−γp ) p p (x − a)(

Qk

i=1

1/p

Γ (ν − γi ) (kνp − γp − kp + 1) R 1/q x kq ν |D f (t)| dt ∗a a · . (kνp − γp − kp + p + 1)

Proof. Similar to Theorem 26.6, p.581 of [2], and Theorem 2.6 of [12]. The extreme case follows Theorem 29. All as in Theorem 28, but p = 1, q = ∞. Then Z

k xY

(37)



γi |D∗a f (ω)| dω ≤

a i=1 kν−γ+1

(x − a) · Q

k i=1



ν kD∗a f k∞,(a,x)

k

 . Γ (ν − γi ) (kν − γ − k + 1) (kν − γ + 1)

(38)

Proof. Similar to Proposition 26.4, p.582 of [2], and Theorem 2.7 of [12]. 
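Before turning to results for two functions, a small numerical sanity check of the Caputo derivative in (8): for f(x) = (x − a)^p with p > n − 1 (so that f^{(k)}(a) = 0, k = 0, ..., n − 1), one has the classical formula D_{*a}^ν (x − a)^p = Γ(p + 1)/Γ(p − ν + 1) (x − a)^{p−ν}. The Python sketch below (sample values ν = 1.5, p = 3; a midpoint rule is used because the kernel has an integrable singularity at t = x) reproduces this from the integral in (8).

    import math
    import numpy as np

    a, x = 0.0, 1.0
    nu, p = 1.5, 3.0
    n = math.ceil(nu)              # n = 2, so f''(t) = 6 (t - a)

    # Caputo derivative via (8): (1/Gamma(n-nu)) * int_a^x (x-t)^(n-nu-1) f^(n)(t) dt
    m = 400000
    s = (np.arange(m) + 0.5) * ((x - a) / m)       # midpoints of s = x - t in (0, x-a)
    integrand = s**(n - nu - 1.0) * 6.0 * (x - s - a)
    D_numeric = integrand.sum() * ((x - a) / m) / math.gamma(n - nu)

    D_exact = math.gamma(p + 1.0) / math.gamma(p - nu + 1.0) * (x - a)**(p - nu)
    print(D_numeric, D_exact)      # close agreement despite the weak singularity at t = x

Since dνe = n for non-integer ν, the kernel exponent n − ν − 1 always lies in (−1, 0), so quadrature rules that sample the endpoint t = x should be avoided in such experiments.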

3.2  Results involving two functions

We present Theorem 30. Let ν ≥ γi + 1, γi ≥ 0, i = 1, 2, n := dνe , and f1 , f2 ∈ (j) (j) AC n ([a, b]) such that f1 (a) = f2 (a) = 0, j = 0, 1, . . . , n − 1, a ≤ x ≤ b. 1 , q (t) ∈ L∞ (a, b) . Consider also p (t) > 0 and q (t) ≥ 0, with all p (t) , p(t) ν Further assume D∗a fi ∈ L∞ (a, b) , i = 1, 2. Let λν > 0 and λα , λβ ≥ 0 such that λν < p, where p > 1. Set Z ω (ν−γk −1)p − 1 Pk (ω) := (ω − t) p−1 (p (t)) (p−1) dt, (39) a

11



k = 1, 2, a ≤ x ≤ b; λ p−1 −λ /p λ p−1 q (ω) (P1 (ω)) α ( p ) (P2 (ω)) β ( p ) (p (ω)) ν

A (ω) :=

λα

(Γ (ν − γ1 )) x

Z

(A (ω))

A0 (x) :=

λβ

(Γ (ν − γ2 ))

,

(40)

ν  p−λ p

p p−λν



,

(41)

if λα + λν ≤ p, if λα + λν ≥ p.

(42)

a

and

( δ1 :=

21−(

λα +λν p

),

1,

If λβ = 0, we obtain that Z x h λ λ γ1 ν q (ω) |D∗a f1 (ω)| α |D∗a f1 (ω)| ν + a λα

λν

ν |D∗a f2 (ω)|

γ1 f2 (ω)| |D∗a

A0 (x) |λβ =0 x

Z

ν p (ω) [|D∗a f1





p

(ω)| +

dω ≤

(43)

λν /p

λν λα + λ ν ν |D∗a f2

i

δ1

p

ν ( λα +λ ) p

.

(ω)| ] dω

a

Proof. Similar to Theorem 2 of [3] and Theorem 4 of [9]. It follows the counterpart of the last theorem. Theorem 31. All here as in Theorem 30. Denote  λ /λ 2 β ν − 1, if λβ ≥ λν , δ3 := 1, if λβ ≤ λν .



(44)

If λα = 0, then Z

x

h λ λ γ2 ν f2 (ω)| β |D∗a f1 (ω)| ν + q (ω) |D∗a

a λβ

γ2 |D∗a f1 (ω)|

(A0 (x) |λα =0 ) 2 Z

x

λν

ν |D∗a f2 (ω)|

p−λν p



p

λν λβ + λν

i

dω ≤

λν /p

p

all a ≤ x ≤ b. 12

λ /p

δ3 ν

ν ν p (ω) [|D∗a f1 (ω)| + |D∗a f2 (ω)| ] dω

a

(45)



 λν +λβ  p

,



Proof. Similar to Theorem 3 of [3] and Theorem 5 of [9]. The complete case λα , λβ 6= 0 follows. Theorem 32. All here as in Theorem 30. Denote (  λα +λβ  − 1, if λα + λβ ≥ λν , 2 λν γ˜1 := 1, if λα + λβ ≤ λν ,



(46)

and ( γ˜2 := Then

x

Z

if λα + λβ + λν ≥ p,

 λα1,  +λβ +λν

1−

2

(47)

if λα + λβ + λν ≤ p.

,

p

h λ λ λ γ1 γ2 ν q (ω) |D∗a f1 (ω)| α |D∗a f2 (ω)| β |D∗a f1 (ω)| ν +

a λβ

γ2 |D∗a f1 (ω)|

 A0 (x)

λα

γ1 |D∗a f2 (ω)|

λν (λα + λβ ) (λα + λβ + λν )

Z ·

x ν p (ω) (|D∗a f1

p

(ω)| +

λν

ν |D∗a f2 (ω)|

λν /p h

ν |D∗a f2

i

λλαν /p γ˜2 + 2 

p

dω ≤ p−λν p

(48) λν /p

(˜ γ1 λβ )

i

 λα +λβ +λν  p

(ω)| ) dω

,

a

all a ≤ x ≤ b. Proof. As Theorem 4 of [3], and Theorem 6 of [9].  We continue with the special important case Theorem 33. Let ν ≥ γ1 + 2, γ1 ≥ 0, n := dνe and f1 , f2 ∈ AC n ([a, b]) (j) (j) such that f1 (a) = f2 (a) = 0, j = 0, 1, . . . , n − 1, a ≤ x ≤ b. Consider 1 also p (t) > 0 and q (t) ≥ 0, with all p (t) , p(t) , q (t) ∈ L∞ (a, b) . Furthermore ν assume D∗a fi ∈ L∞ (a, b) , i = 1, 2. Let λα ≥ 0, 0 < λα+1 < 1, and p > 1. Denote  λ /λ  2 α α+1 − 1, if λα ≥ λα+1 , θ3 := , (49) 1, if λα ≤ λα+1 , x

 Z L (x) := 2 a

λα+1   (1−λα+1 )  1 θ3 λα+1 1−λα+1 (q (ω)) dω , λα + λα+1

and

x

Z

(x − t)

P1 (x) :=

(ν−γ1 −1)p p−1

−1/(p−1)

(p (t))

dt,

(50)

(51)

a

T (x) := L (x)

p−1 P1 (x)( p ) Γ (ν − γ1 )

ω1 := 2(

p−1 p

!(λα +λα+1 )

)(λα +λα+1 ) ,

13

,

(52) (53)



and Φ (x) := T (x) ω1 . Then

x

Z

(54)

h λα+1 λ γ1 +1 γ1 + f2 (ω) q (ω) |D∗a f1 (ω)| α D∗a

a

i γ +1 D∗a1 f1 (ω) λα+1 dω ≤

λα

γ1 |D∗a f2 (ω)| Z Φ (x)

a ν |D∗a f2

x

p

ν p (ω) (|D∗a f1 (ω)| +

 λα +λα+1  p (ω)| ) dω] , p

(55)

all a ≤ x ≤ b. Proof. As in Theorem 5 of [3], and Theorem 8 of [9].  We give Corollary 34. All here as in Theorem 30, with λβ = 0, p (t) = q (t) = 1. Then Z xh λα

λν

ν |D∗a f1 (ω)|

γ1 f1 (ω)| |D∗a

+

a λα

γ1 f2 (ω)| |D∗a

Z C1 (x)

x

λν

ν |D∗a f2 (ω)|

p

i

dω ≤ p

ν ν (|D∗a f1 (ω)| + |D∗a f2 (ω)| ) dω

(56)

ν ( λα +λ ) p

,

a

all a ≤ x ≤ b, where C1 (x) := A0 (x) |λβ =0 ( δ1 :=

21−(

λα +λν p



),



λν λα + λ ν

λν /p δ1 ,

(57)

if λα + λν ≤ p, if λα + λν ≥ p.

1,

(58)

We find that ( 

A0 (x) |λβ =0 =

λα p−λα (p − 1)( p )

!

λα p−λα (νp − γ1 p − 1)( p ) !) p−λν (p − λν )( p ) · p−λν (λα νp − λα γ1 p − λα + p − λν )( p )

λα

·

(Γ (ν − γ1 ))

λα νp−λα γ1 p−λα +p−λν ). p (x − a)(

Proof. As Corollary 1 of [3], and Corollary 10 of [9]. 14

(59) 



Corollary 35. (All as in Theorem 30, λβ = 0, p (t) = q (t) = 1, λα = λν = 1, p = 2.) (j) In detail: Let ν ≥ γ1 + 1, γ1 ≥ 0, n := dνe , f1 , f2 ∈ AC n ([a, b]) : f1 (a) = (j) ν f2 (a) = 0, j = 0, 1, . . . , n − 1, a ≤ x ≤ b, D∗a fi ∈ L∞ (a, b) , i = 1, 2. Then Z x γ1 ν [|(D∗a f1 ) (ω)| |(D∗a f1 ) (ω)| + a γ1 ν |(D∗a f2 ) (ω)| |(D∗a f2 ) (ω)|] dω ≤

(60)

(ν−γ )

1 (x − a) √ √ 2Γ (ν − γ1 ) ν − γ1 2ν − 2γ1 − 1

x

Z

h

2

2

ν ν ((D∗a f1 ) (ω)) + ((D∗a f2 ) (ω))

i

!

 dω ,

a

all a ≤ x ≤ b. Corollary 36. (to Theorem 31; λα = 0, p (t) = q (t) = 1.) It holds Z xh λ λ γ2 ν |D∗a f2 (ω)| β |D∗a f1 (ω)| ν + a λβ

γ2 |D∗a f1 (ω)|

Z C2 (x)

x

λν

ν |D∗a f2 (ω)|

p

i

dω ≤ p

ν ν [|D∗a f1 (ω)| + |D∗a f2 (ω)| ] dω



(61)  λν +λβ  p

,

a

all a ≤ x ≤ b, where C2 (x) := (A0 (x) |λα =0 ) 2  δ3 :=



p−λν p

2λβ /λν − 1, 1,

λν λβ + λν

λν /p

if λβ ≥ λν , if λβ ≤ λν

λ /p

δ3 ν ,

(62)

 .

(63)

We find that   (A0 (x) |λα =0 ) =  

 λβ p−λβ  p (p − 1)



 λβ p−λβ   · p (νp − γ2 p − 1) !) p−λν (p − λν )( p ) · p−λν (λβ νp − λβ γ2 p − λβ + p − λν )( p )  λβ νp−λβ γ2 p−λβ +p−λν  p (x − a) . λβ

(Γ (ν − γ2 ))

15

(64)



Corollary 37. (to Theorem 31; λα = 0, p (t) = q (t) = 1, λβ = λν = 1, p = 2.) In detail: (j) (j) Let ν ≥ γ2 +1, γ2 ≥ 0, n := dνe , f1 , f2 ∈ AC n ([a, b]) : f1 (a) = f2 (a) = 0, ν j = 0, 1, . . . , n − 1, a ≤ x ≤ b, D∗a fi ∈ L∞ (a, b) , i = 1, 2. Then Z x γ2 ν [|D∗a f2 (ω)| |D∗a f1 (ω)| + a γ2 ν |D∗a f1 (ω)| |D∗a f2 (ω)|] dω ≤

! (ν−γ2 ) (x − a) √ . √ √ 2Γ (ν − γ2 ) ν − γ2 2ν − 2γ2 − 1 Z x    2 2 ν ν (D∗a f1 (ω)) + (D∗a f2 (ω)) dω ,

(65)

a

all a ≤ x ≤ b. We continue with related results regarding k·k∞ . Theorem 38. Let ν ≥ γi + 1, γi ≥ 0, i = 1, 2, n := dνe and f1 , f2 ∈ (j) (j) AC n ([a, b]) such that f1 (a) = f2 (a) = 0, j = 0, 1, . . . , n − 1; a ≤ x ≤ b. ν fi ∈ L∞ (a, b) , Consider p (x) ≥ 0 and p (x) ∈ L∞ (a, b) , and assume D∗a i = 1, 2. Let λα , λβ , λν ≥ 0. Set (νλ −γ λ +νλ −γ λ +1)

T (x) :=

· Then

Z

x

2 β β (x − a) α 1 α (νλα − γ1 λα + νλβ − γ2 λβ + 1)

kp (s)k∞,(a,x) λα

(Γ (ν − γ1 + 1))

λβ

(Γ (ν − γ2 + 1))

.

(66)

h λ λ λ ν γ2 γ1 f2 (ω)| β |D∗a f1 (ω)| ν + f1 (ω)| α |D∗a p (ω) |D∗a

a λβ

γ2 |D∗a f1 (ω)|

λα

γ1 |D∗a f2 (ω)|

λν

ν |D∗a f2 (ω)|

i

dω ≤

T (x) h ν 2λβ 2(λα +λν ) ν kD∗a f1 k∞,(a,x) + kD∗a f1 k∞,(a,x) + 2 i 2λβ 2(λα +λν ) ν ν kD∗a f2 k∞,(a,x) + kD∗a f2 k∞,(a,x) ,

(67)

all a ≤ x ≤ b. Proof. Similar to Theorem 7 of [3], and Theorem 18 of [9].  We give Corollary 39. (to Theorem 38) Let ν ≥ γ1 + 2, γ1 ≥ 0, n := dνe , f1 , (j) (j) ν f2 ∈ AC n ([a, b]) such that f1 (a) = f2 (a) = 0, j = 0, 1, . . . , n − 1; D∗a fi ∈ L∞ (a, b) , i = 1, 2. Then

16



x

Z

γ +1  γ1 1 |D∗a f1 (ω)| D∗a f2 (ω) +

a

γ +1 γ  1 D∗a1 f1 (ω) |D∗a f2 (ω)| dω 2(ν−γ1 )

(x − a)

≤ h

2

2 (Γ (ν − γ1 + 1))

i 2 2 ν ν f2 k∞,(a,x) , kD∗a f1 k∞,(a,x) + kD∗a

(68)

all a ≤ x ≤ b. Next we give converse results involving two functions. Theorem 40. Let γj ≥ 0, 1 ≤ ν − γj < p1 , 0 < p < 1, j = 1, 2; n := dνe , (l)

(l)

and f1 , f2 ∈ AC n ([a, b]) such that f1 (a) = f2 (a) = 0, l = 0, 1, . . . , n − 1, 1 a ≤ x ≤ b. Consider also p (t) > 0 and q (t) ≥ 0, with all p (t) , p(t) , q (t) , 1 ν q(t) ∈ L∞ (a, b) . Further assume D∗a fi ∈ L∞ (a, b) , i = 1, 2; each of which is of fixed sign a.e. on [a, b] . Let λν > 0 and λα , λβ ≥ 0 such that λν > p. Here Pk (ω) , A (ω) , A0 (x) are as in (39) , (40) , (41) , respectively. Set λα +λν (69) δ1 := 21−( p ) . If λβ = 0, then x

Z

h λ λ ν γ1 f1 (ω)| ν + q (ω) |D∗a f1 (ω)| α |D∗a

a λα

λν

ν |D∗a f2 (ω)|

γ1 f2 (ω)| |D∗a

A0 (x) |λβ =0 Z

x ν p (ω) [|D∗a f1



p



(ω)| +

λν λα + λ ν ν |D∗a f2

i

dω ≥

λν /p δ1 p

ν ( λα +λ ) p

(ω)| ] dω

.

(70)

a

Proof. Similar to Theorem 5 of [8] and Theorem 4 of [7].  We continue with Theorem 41. All here as in Theorem 40. Further assume λβ ≥ λν . Denote δ2 := 21−(λβ /λν ) , δ3 := (δ2 − 1) 2−(λβ /λν ) . If λα = 0, then Z

x

h λ λ γ2 ν q (ω) |D∗a f2 (ω)| β |D∗a f1 (ω)| ν +

a

17

(71)



λβ

λν

γ2 |D∗a f1 (ω)|

p−λν p

(A0 (x) |λα =0 ) 2 x

Z

ν |D∗a f2 (ω)|



λν λβ + λν

i

dω ≥

λν /p

p

p

ν ν p (ω) [|D∗a f1 (ω)| + |D∗a f2 (ω)| ] dω

·

λ /p

δ3 ν 

 λν +λβ  p

,

(72)

a

all a ≤ x ≤ b. Proof. Similar to Theorem 6 of [8], and Theorem 5 of [7].  We give Theorem 42. Let ν ≥ 2 and γ1 ≥ 0 such that 2 ≤ ν − γ1 < p1 , 0 < p < 1, (l)

(l)

n := dνe . Consider f1 , f2 ∈ AC n ([a, b]) such that f1 (a) = f2 (a) = 0, ν l = 0, 1, . . . , n − 1, a ≤ x ≤ b. Assume that D∗a fi ∈ L∞ (a, b) , i = 1, 2; each of which is of fixed sign a.e. on [a, b] . Consider also p (t) > 0 and q (t) ≥ 0, with 1 1 , q (t) , q(t) ∈ L∞ (a, b) . Let λα ≥ λα+1 > 1. Denote all p (t) , p(t)   θ3 := 21−(λα /λα+1 ) − 1 2−λα /λα+1 ,

(73)

L (x) as in (50) , P1 (x) as in (51) , T (x) as in (52) , ω1 as in (53) , and Φ as in (54) . Then Z x h λα+1 λ γ1 +1 γ1 f1 (ω)| α D∗a f2 (ω) + q (ω) |D∗a a

i γ +1 D∗a1 f1 (ω) λα+1 dω ≥   Z x  λα +λp α+1 p p ν ν Φ (x) , p (ω) (|D∗a f1 (ω)| + |D∗a f2 (ω)| ) dω λα

γ1 |D∗a f2 (ω)|

(74)

a

all a ≤ x ≤ b. Proof. Similar to Theorem 7 of [8], and Theorem 7 of [7].  We have Corollary 43. (to Theorem 40; λβ = 0, p (t) = q (t) = 1). Then Z xh λ λ γ1 ν |D∗a f1 (ω)| α |D∗a f1 (ω)| ν + a λα

γ1 |D∗a f2 (ω)|

Z C1 (x)

x ν (|D∗a f1

λν

ν |D∗a f2 (ω)|

p

(ω)| +

ν |D∗a f2

a

18

i

dω ≥ p

(ω)| ) dω

(75)

ν ( λα +λ ) p

,



all a ≤ x ≤ b, where C1 (x) := A0 (x) |λβ =0





δ1 := 21−(

λν λα + λ ν

λα +λν p

λν /p δ1 ,

(76)

).

(77)



Here A0 (x) |λβ =0 is given by (59) . Corollary 44. (to Theorem 41; λα = 0, p (t) = q (t) = 1, λβ ≥ λν ). Then Z xh λ λ γ2 ν |D∗a f2 (ω)| β |D∗a f1 (ω)| ν + a λβ

γ2 |D∗a f1 (ω)|

Z C2 (x)

x ν [|D∗a f1

λν

ν |D∗a f2 (ω)|

p

(ω)| +

ν |D∗a f2

i

dω ≥ 

p

(78)  λν +λβ  p

(ω)| ] dω

,

a

all a ≤ x ≤ b, where C2 (x) := (A0 (x) |λα =0 ) 2

p−λν p



λν λβ + λν

λν /p

λ /p

δ3 ν .

Here (A0 (x) |λα =0 ) is given by (64) .

3.3  Results involving several functions

We present Theorem 45. Here all notations, terms and assumptions are as in Theorem 30, but for fj ∈ AC n ([a, b]) , with j = 1, . . . , M ∈ N. Instead of δ1 there, we define here ( λα +λν M 1−( p ) , if λα + λν ≤ p, ∗ (79) δ1 := λα +λν 2( p )−1 , if λα + λν ≥ p. Call ϕ1 (x) := A0 (x) |λβ =0





λν λα + λ ν

λν /p .

(80)

If λβ = 0, then Z

x

 q (ω) 

a

M X

 λα

γ1 |D∗a fj (ω)|

λν 

ν |D∗a fj (ω)|



j=1

 Z ≤ δ1∗ ϕ1 (x)  a

x

ν   ( λα +λ ) p M X p ν |D∗a fj (ω)|  dω  , p (ω) 

j=1

19

(81)



all a ≤ x ≤ b. Proof. As in Theorem 2 of [4], and Theorem 4 of [10]. We continue with Theorem 46. All here as in Theorem 45. Denote  λ /λ 2 β ν − 1, if λβ ≥ λν , δ3 := 1, if λβ ≤ λν , ( 1,  if λν + λβ ≥ p, λν +λβ ε2 := 1− p M if λν + λβ ≤ p, and ϕ2 (x) := (A0 (x) |λα =0 ) 2(

p−λν p

)



λν λβ + λν



(82) (83)

λν /p

λ /p

δ3 ν .

(84)

If λα = 0, then Z

 −1 h M X

x

q (ω) a



λβ

γ2 fj+1 (ω)| |D∗a

+

j=1 λβ

γ2 |D∗a fj (ω)|

h

λν

ν |D∗a fj (ω)|

λν

ν |D∗a fj+1 (ω)| λβ

γ2 fM (ω)| |D∗a

io

+

λν

ν |D∗a f1 (ω)|

+ io λ λ ν γ2 f1 (ω)| β |D∗a fM (ω)| ν dω |D∗a Z x  λν +λβ  p ≤2 ε2 ϕ2 (x) · p (ω) a

  β   λν +λ  p M  X p ν |D∗a fj (ω)|  dω · , 

(85)

j=1

all a ≤ x ≤ b. Proof. As Theorem 3 of [4], and Theorem 5 of [10]. We give the general case Theorem 47. All as in Theorem 45. Denote (  λ +λ  2

γ˜1 :=

α β λν

− 1,

1,



if λα + λβ ≥ λν , if λα + λβ ≤ λν ,

(86)

and ( γ˜2 :=

if λα + λβ + λν ≥ p,

 λα1,  +λβ +λν

1−

2

p

, 20

if λα + λβ + λν ≤ p.

(87)


Set 

λν ϕ3 (x) := A0 (x) · (λα + λβ ) (λα + λβ + λν )  λ  ν λν p−λν p ) ( p p (˜ γ1 λβ ) · λα γ˜2 + 2 , and

( ε3 :=

M


 λpν

 λα1,+λβ +λν 

if λα + λβ + λν ≥ p,

p

if λα + λβ + λν ≤ p.

1−

(88)

(89)

Then

Z



M −1 h X

x

q (ω)  a

λα

γ1 |(D∗a fj ) (ω)|

λβ

γ2 |(D∗a fj+1 ) (ω)|

λν

ν |(D∗a fj ) (ω)|

j=1

i λ λ λ ν γ1 γ2 fj+1 ) (ω)| ν fj+1 ) (ω)| α |(D∗a fj ) (ω)| β |(D∗a + |(D∗a h λ λ λ γ2 ν γ1 fM ) (ω)| β |(D∗a f1 ) (ω)| ν + |(D∗a f1 ) (ω)| α |(D∗a ii λ λ λ ν γ2 γ1 fM ) (ω)| ν dω + |(D∗a f1 ) (ω)| β |(D∗a fM ) (ω)| α |(D∗a Z x  λα +λβ +λν  p ≤2 ε3 ϕ3 (x) · p (ω) a

    λα +λpβ +λν  M  X p ν · |(D∗a fj ) (ω)|  dω , 

(90)

j=1

all a ≤ x ≤ b. Proof. As Theorem 4 of [4], and Theorem 6 of [10].  We give Theorem 48. All here as in Theorem 33, but for fj ∈ AC n ([a, b]) , j = 1, . . . , M ∈ N. Also put ) ( 1, if λα + λα+1 ≥ p,   λα +λα+1 (91) ε4 := 1− p M if λα + λα+1 ≤ p. Then Z

x

q (ω) a

 −1 h M X 

λα

γ1 |(D∗a fj ) (ω)|

j=1

21

γ +1  D∗a1 fj+1 (ω) λα+1



λα+1 io  λ γ1 γ1 +1 + |(D∗a fj+1 ) (ω)| α D∗a fj (ω) h λα+1  λ γ1 γ1 +1 + |(D∗a f1 ) (ω)| α D∗a fM (ω) λα+1 io  λ γ1 γ1 +1 + |(D∗a fM ) (ω)| α D∗a f1 (ω) dω Z x  λα +λα+1  p p (ω) ε4 Φ (x) · ≤2 a

     λα +λp α+1 M X p ν · , |(D∗a fj ) (ω)|  dω 

(92)

j=1

all a ≤ x ≤ b. Proof. As Theorem 5 of [4], and Theorem 7 of [10].  We continue with Corollary 49. (to Theorem 45, λβ = 0, p (t) = q (t) = 1, λα = λν = 1, p = 2). In detail: Let ν ≥ γ1 + 1, γ1 ≥ 0, n := dνe , fj ∈ AC n ([a, b]) , j = 1, . . . , M ∈ N; ν fj ∈ L∞ (a, b) , j = 1, . . . , M. a ≤ x ≤ b, and D∗a (l) Here fj (a) = 0, l = 0, 1, . . . , n − 1; j = 1, . . . , M. Then   Z x X M ν γ1  fj (ω)| |D∗a fj (ω)| dω ≤ |D∗a a

j=1 ν−γ

(x − a) 1 √ √ 2Γ (ν − γ1 ) ν − γ1 2ν − 2γ1 − 1     M  Z x X 2 ν  (D∗a fj (ω))  dω ,  a 

! .

(93)

j=1

all a ≤ x ≤ b. Corollary 50. (to Theorem 46, λα = 0, p (t) = q (t) = 1, λβ = λν = 1, p = 2). In detail: ν Let ν ≥ γ2 + 1, γ2 ≥ 0, n := dνe , fj ∈ AC n ([a, b]) , D∗a fj ∈ L∞ (a, b) , j = 1, . . . , M ∈ N; a ≤ x ≤ b. (l) Here fj (a) = 0, l = 0, 1, . . . , n − 1; j = 1, . . . , M. Then  Z x M −1 X γ2 ν [|(D∗a fj+1 ) (ω)| |(D∗a fj ) (ω)|   a j=1

22



γ2 ν + |(D∗a fj ) (ω)| |(D∗a fj+1 ) (ω)|]} γ2 ν + [|(D∗a fM ) (ω)| |(D∗a f1 ) (ω)| γ2 ν + |(D∗a f1 ) (ω)| |(D∗a fM ) (ω)|]} dω



(ν−γ )

2 2 (x − a) √ ≤ √ Γ (ν − γ2 ) ν − γ2 2ν − 2γ2 − 1     M Z x X  2 ν  ((D∗a fj ) (ω))  dω ,  a 

! (94)

j=1

all a ≤ x ≤ b. Corollary 51. (to Theorem 48, λα = 1, λα+1 = 1/2, p = 3/2, p (t) = q (t) = 1). In detail: Let ν ≥ γ1 + 2, γ1 ≥ 0, n := dνe , and fj ∈ AC n ([a, b]) , j = 1, . . . , M ∈ N, (l) ν such that fj (a) = 0, l = 0, 1, . . . , n − 1, a ≤ x ≤ b. Assume also D∗a fj ∈ L∞ (a, b) , j = 1, . . . , M. Set 3ν−3γ1 −1   ) 2 (x − a)( 2 ∗ , (95) Φ (x) := √ 3/2 3ν − 3γ1 − 2 (Γ (ν − γ1 )) all a ≤ x ≤ b. Then  " Z x M −1 X a



γ1 fj ) (ω)| |(D∗a

r   γ1 +1 D∗a fj+1 (ω)

j=1

#) r   γ +1 γ1 + |(D∗a fj+1 ) (ω)| D∗a1 fj (ω) " +

γ1 f1 ) (ω)| |(D∗a

γ1 fM ) (ω)| + |(D∗a

 Z ≤ 2Φ∗ (x) ·  a

x



r   γ1 +1 D∗a fM (ω)

#) r   γ1 +1 dω D∗a f1 (ω) M X



 3/2 

ν |(D∗a fj ) (ω)|

 dω  ,

(96)

j=1

all a ≤ x ≤ b. We continue with results regarding k·k∞ . Theorem 52. All as in Theorem 38 but for fj ∈ AC n ([a, b]) , j = 1, . . . , M ∈ N. Then

23



Z

x

p (ω) a

 −1 h M X 

λα

γ1 |(D∗a fj ) (ω)|

λβ

γ2 |(D∗a fj+1 ) (ω)|

λν

ν |(D∗a fj ) (ω)|

j=1

io λ λ λ γ2 γ1 ν + |(D∗a fj ) (ω)| β |(D∗a fj+1 ) (ω)| α |(D∗a fj+1 ) (ω)| ν h λ λ λ γ1 γ2 ν + |(D∗a f1 ) (ω)| α |(D∗a fM ) (ω)| β |(D∗a f1 ) (ω)| ν io λ λ λ γ2 γ1 ν + |(D∗a f1 ) (ω)| β |(D∗a fM ) (ω)| α |(D∗a fM ) (ω)| ν dω   M n X o 2λβ 2(λα +λν ) ν ν ≤ T (x) kD∗a fj k∞,(a,x) + kD∗a fj k∞,(a,x) ,  

(97)

j=1

all a ≤ x ≤ b. Proof. Based on Theorem 38.  We give Corollary 53. (to Theorem 52) In detail: Let ν ≥ γ1 + 2, γ1 ≥ 0, n := dνe and fj ∈ AC n ([a, b]) , j = 1, . . . , M ∈ N, (l) such that fj (a) = 0, l = 0, 1, . . . , n − 1; j = 1, . . . , M ; a ≤ x ≤ b. Further ν fj ∈ L∞ (a, b) , j = 1, . . . , M. Then suppose that D∗a  Z x M −1 X γ +1   γ1 1 fj+1 (ω) |(D∗a fj ) (ω)| D∗a a  j=1

γ +1   γ1 1 fj+1 ) (ω)| + D∗a fj (ω) |(D∗a γ +1   γ1 1 fM (ω) + |(D∗a f1 ) (ω)| D∗a γ +1   γ1 1 + D∗a f1 (ω) |(D∗a fM ) (ω)| dω  ! M 2(ν−γ1 ) X (x − a) 2 ν  ≤ kD∗a fj k∞  , 2 (Γ (ν − γ1 + 1)) j=1

(98)

all a ≤ x ≤ b. We continue with converse results. Theorem 54. Let γj ≥ 0, 1 ≤ ν − γj < p1 , 0 < p < 1, j = 1, 2; n := dνe , and (l)

fi ∈ AC n ([a, b]) , i = 1, . . . , M ∈ N, such that fi (a) = 0, l = 0, 1, . . . , n − 1; i = 1, . . . , M ; a ≤ x ≤ b. Consider also p (t) > 0 and q (t) ≥ 0, with all p (t) , 1 1 ν p(t) , q (t) , q(t) ∈ L∞ (a, b) . Further assume D∗a fi ∈ L∞ (a, b) , i = 1, . . . , M ; each of which is of fixed sign a.e. on [a, b] . Let λν > 0 and λα , λβ ≥ 0 such that λν > p.

24



Here Pk (ω) , k = 1, 2, A (ω) , A0 (x) are as in (39) , (40) , (41) , respectively. Call λν /p   λν ϕ1 (x) := A0 (x) |λβ =0 , (99) λα + λ ν λα +λν p

δ1∗ := M 1−(

).

(100)

If λβ = 0, then Z

x

 q (ω) 

a

M X

 λα

γ1 |D∗a fj (ω)|

λ ν |D∗a fj (ω)| ν  dω

(101)

j=1

 Z ≥ δ1∗ ϕ1 (x) 

x

ν   ( λα +λ ) p M X p ν p (ω)  , |D∗a fj (ω)|  dω 

a

j=1

all a ≤ x ≤ b. Proof. As Theorem 11 of [7], and Theorem 11 of [8].  We continue with Theorem 55. All as in Theorem 54. Assume λβ ≥ λν . Denote ϕ2 (x) := (A0 (x) |λα =0

p−λν ) 2( p )



λν λβ + λν

λν /p

λ /p

δ3 ν ,

(102)

where δ3 is as in (71) . If λα = 0, then  Z x −1 h M X λ λ ν γ2 fj+1 ) (ω)| β |(D∗a fj ) (ω)| ν q (ω) |(D∗a  a j=1

io λ λ ν γ2 fj ) (ω)| β |(D∗a fj+1 ) (ω)| ν + |(D∗a h λ λ γ2 ν + |(D∗a fM ) (ω)| β |(D∗a f1 ) (ω)| ν io λ λ γ2 ν f1 ) (ω)| β |(D∗a fM ) (ω)| ν dω ≥ + |(D∗a Z x  λν +λβ   λν +λβ  1− p p M 2 ϕ2 (x) · p (ω) a

  M X p ν  |(D∗a fj ) (ω)|  dω

  β  λν +λ p 

,

(103)



j=1

all a ≤ x ≤ b. Proof. As Theorem 12 of [7], and Theorem 12 of [8]. 25





We give Theorem 56. Let ν ≥ 2 and γ1 ≥ 0 such that 2 ≤ ν − γ1 < p1 , 0 < p < 1, (l)

n := dνe . Consider fi ∈ AC n ([a, b]) , i = 1, . . . , M ∈ N, such that fi (a) = 0, ν l = 0, 1, . . . , n − 1; i = 1, . . . , M ; a ≤ x ≤ b. Assume that D∗a fi ∈ L∞ (a, b) , i = 1, . . . , M ; each of which is of fixed sign a.e. on [a, b] . Consider also p (t) > 0 1 1 and q (t) ≥ 0, with all p (t) , p(t) , q (t) , q(t) ∈ L∞ (a, b) . Let λα ≥ λα+1 > 1. Here θ3 is as in (73) , L (x) as in (50) , P1 (x) as in (51) , T (x) as in (52) , ω1 as in (53) , and Φ as in (54) .  Z x −1 h M X λα+1 λ γ1 +1 γ1 q (ω) |(D∗a fj ) (ω)| α D∗a fj+1 (ω)  a j=1

λα+1 io λ γ1 +1 γ1 fj (ω) + |(D∗a fj+1 ) (ω)| α D∗a h λα+1 λ γ1 +1 γ1 + |(D∗a f1 ) (ω)| α D∗a fM (ω) λα+1 io λ γ1 +1 γ1 dω ≥ f1 (ω) fM ) (ω)| α D∗a + |(D∗a  λα +λα+1   λα +λα+1  1− p p M 2 Φ (x) ·     λα +λp α+1   Z x M X p ν  |(D∗a fj ) (ω)|  dω  , p (ω)  a

(104)

j=1

all a ≤ x ≤ b. Proof. As Theorem 13 of [7] and Theorem 13 of [8].  We have the special cases. Corollary 57. (to Theorem 54, λβ = 0, p (t) = q (t) = 1). Then   Z x X M λ λ ν γ1  fj (ω)| α |D∗a fj (ω)| ν  dω |D∗a a

j=1

 Z ≥ δ1∗ ϕ1 (x)  a

x

ν   ( λα +λ ) p M X p ν  |D∗a fj (ω)|  dω  ,

j=1

all a ≤ x ≤ b.  In (105) , A0 (x) |λβ =0 of ϕ1 (x) is given by (59) . Corollary 58. (to Theorem 55, λα = 0, p (t) = q (t) = 1). It holds  Z x M −1 h X λ λ γ2 ν |(D∗a fj ) (ω)| ν fj+1 ) (ω)| β |(D∗a a  j=1

26

(105)



io λ λ γ2 ν + |(D∗a fj ) (ω)| β |(D∗a fj+1 ) (ω)| ν h λ λ γ2 ν + |(D∗a fM ) (ω)| β |(D∗a f1 ) (ω)| ν io λ λ γ2 ν + |(D∗a f1 ) (ω)| β |(D∗a fM ) (ω)| ν dω ≥   λν +λβ    λν +λβ  1− p p M 2 ϕ2 (x) ·   β    λν +λ p M  x X p ν  , |(D∗a fj ) (ω)|  dω  a 

 Z

(106)

j=1

all a ≤ x ≤ b. In (106) , (A0 (x) |λα =0 ) of ϕ2 (x) is given by (64) . Next we apply above results on the spherical shell.

4  Background-II

Here initially we follow [24], pp. 149-150 and [25], pp. 87-88. Let us denote by $dx \equiv \lambda_{\mathbb{R}^N}(dx)$, $N \in \mathbb{N}$, the Lebesgue measure on $\mathbb{R}^N$, and by $S^{N-1} := \{x \in \mathbb{R}^N : |x| = 1\}$ the unit sphere of $\mathbb{R}^N$, where $|\cdot|$ stands for the Euclidean norm in $\mathbb{R}^N$. Also denote the ball $B(0,R) := \{x \in \mathbb{R}^N : |x| < R\} \subseteq \mathbb{R}^N$, $R > 0$, and the spherical shell $A := B(0,R_2) - B(0,R_1)$, $0 < R_1 < R_2$.

For $x \in \mathbb{R}^N - \{0\}$ we can write uniquely $x = r\omega$, where $r = |x| > 0$ and $\omega = x/r \in S^{N-1}$, $|\omega| = 1$. Clearly here $\mathbb{R}^N - \{0\} = (0,\infty) \times S^{N-1}$, and the map $\Phi : \mathbb{R}^N - \{0\} \to S^{N-1}$, $\Phi(x) = x/|x|$, is continuous. Also $A = [R_1, R_2] \times S^{N-1}$.

Let us denote by $d\omega \equiv \lambda_{S^{N-1}}(\omega)$ the surface measure on $S^{N-1}$, to be defined as the image under $\Phi$ of $N \cdot \lambda_{\mathbb{R}^N}$ restricted to the Borel class of $B(0,1) - \{0\}$. More precisely, the last definition reads as follows: let $A \subset S^{N-1}$ be a Borel set, and let $\widetilde{A} := \{ru : 0 < r < 1, u \in A\} \subset \mathbb{R}^N$; we define
$$\lambda_{S^{N-1}}(A) = N \cdot \lambda_{\mathbb{R}^N}\bigl(\widetilde{A}\bigr).$$
Here $\mathcal{B}_X$ stands for the Borel class on the space $X$. We denote by
$$\omega_N \equiv \lambda_{S^{N-1}}\bigl(S^{N-1}\bigr) = \int_{S^{N-1}} d\omega = \frac{2\pi^{N/2}}{\Gamma(N/2)}$$
the surface area of $S^{N-1}$, and we get the volume
$$|B(0,r)| = \frac{\omega_N r^N}{N} = \frac{2\pi^{N/2} r^N}{N\,\Gamma(N/2)}, \qquad \text{so that} \qquad |B(0,1)| = \frac{2\pi^{N/2}}{N\,\Gamma(N/2)}.$$
Clearly here
$$\mathrm{Vol}(A) = |A| = \frac{\omega_N \bigl(R_2^N - R_1^N\bigr)}{N} = \frac{2\pi^{N/2}\bigl(R_2^N - R_1^N\bigr)}{N\,\Gamma(N/2)}.$$
Next, define $\psi : (0,\infty) \times S^{N-1} \to \mathbb{R}^N - \{0\}$ by $\psi(r,\omega) := r\omega$; $\psi$ is a one-to-one and onto function, thus $(r,\omega) \equiv \psi^{-1}(x) = (|x|, \Phi(x))$ are called the polar coordinates of $x \in \mathbb{R}^N - \{0\}$. Finally, define the measure $R_N$ on $\bigl((0,\infty), \mathcal{B}_{(0,\infty)}\bigr)$ by
$$R_N(\Gamma) = \int_{\Gamma} r^{N-1}\, dr, \qquad \text{any } \Gamma \in \mathcal{B}_{(0,\infty)}.$$
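As a quick numerical aside (not part of the original text), the closed forms above are easy to check. The short script below is a minimal sketch assuming Python with NumPy; it compares $\omega_N = 2\pi^{N/2}/\Gamma(N/2)$ with the familiar values for $N = 2, 3$, and compares the shell-volume formula with a Monte Carlo estimate for a sample shell in $\mathbb{R}^3$.

import numpy as np
from math import pi, gamma

def omega_N(N):
    """Surface area of the unit sphere S^{N-1}: 2*pi^{N/2} / Gamma(N/2)."""
    return 2.0 * pi ** (N / 2) / gamma(N / 2)

def shell_volume(N, R1, R2):
    """Vol(A) = omega_N * (R2^N - R1^N) / N for the shell A = B(0,R2) - B(0,R1)."""
    return omega_N(N) * (R2 ** N - R1 ** N) / N

# Known values: omega_2 = 2*pi (circle), omega_3 = 4*pi (sphere).
print(omega_N(2), 2 * pi)
print(omega_N(3), 4 * pi)

# Monte Carlo check of the shell volume in R^3 (sampling a bounding cube).
rng = np.random.default_rng(0)
N, R1, R2 = 3, 1.0, 2.0
pts = rng.uniform(-R2, R2, size=(200_000, N))
r = np.linalg.norm(pts, axis=1)
inside = (r > R1) & (r < R2)
mc_vol = inside.mean() * (2 * R2) ** N      # fraction of the cube times cube volume
print(mc_vol, shell_volume(N, R1, R2))      # both close to (4*pi/3)*(8 - 1)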

We mention the very important theorem.

Theorem 59. (see exercise 6, pp. 149-150 in [24] and Theorem 5.2.2, pp. 87-88 of [25]) We have that $\lambda_{\mathbb{R}^N} = (R_N \times \lambda_{S^{N-1}}) \circ \psi^{-1}$ on $\mathcal{B}_{\mathbb{R}^N - \{0\}}$. In particular, if $f$ is a non-negative Borel measurable function on $\bigl(\mathbb{R}^N, \mathcal{B}_{\mathbb{R}^N}\bigr)$, then the Lebesgue integral
$$\int_{\mathbb{R}^N} f(x)\, dx = \int_{(0,\infty)} r^{N-1} \left( \int_{S^{N-1}} f(r\omega)\, \lambda_{S^{N-1}}(d\omega) \right) dr = \int_{S^{N-1}} \left( \int_{(0,\infty)} f(r\omega)\, r^{N-1}\, dr \right) \lambda_{S^{N-1}}(d\omega). \qquad (107)$$

Clearly (107) is true for $f$ a Borel integrable function taking values in $\mathbb{R}$. Based on Theorem 59, in [5] we proved the next result, which is the main tool of this section.

Proposition 60. Let $f : A \to \mathbb{R}$ be a Lebesgue integrable function, where $A := B(0,R_2) - B(0,R_1)$, $0 < R_1 < R_2$. Then
$$\int_A f(x)\, dx = \int_{S^{N-1}} \left( \int_{R_1}^{R_2} f(r\omega)\, r^{N-1}\, dr \right) d\omega. \qquad (108)$$
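The decomposition (108) is also easy to test numerically for a radial integrand, where the inner surface integral simply contributes the factor $\omega_N$. The sketch below is an illustrative aside, not from the paper; it assumes Python with NumPy and SciPy, and it integrates $f(x) = |x|^2$ over a shell in $\mathbb{R}^3$ both by the one-dimensional radial integral of (108) and by Monte Carlo over the shell.

import numpy as np
from math import pi, gamma
from scipy.integrate import quad

N, R1, R2 = 3, 1.0, 2.0
omega_N = 2.0 * pi ** (N / 2) / gamma(N / 2)           # surface area of S^{N-1}

f = lambda x: np.sum(x * x, axis=-1)                    # f(x) = |x|^2, a radial function
g = lambda r: r ** 2                                    # f(r*omega) = r^2 for |omega| = 1

# Right-hand side of (108): surface factor times the radial integral.
radial, _ = quad(lambda r: g(r) * r ** (N - 1), R1, R2)
rhs = omega_N * radial                                  # exact value: 4*pi*(R2^5 - R1^5)/5

# Left-hand side: Monte Carlo estimate of the integral of f over the shell A.
rng = np.random.default_rng(1)
pts = rng.uniform(-R2, R2, size=(400_000, N))
r = np.linalg.norm(pts, axis=1)
inA = (r > R1) & (r < R2)
lhs = f(pts[inA]).mean() * inA.mean() * (2 * R2) ** N   # mean over the cube times cube volume

print(lhs, rhs, 4 * pi * (R2 ** 5 - R1 ** 5) / 5)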

We make
Remark 61. Let $F : A = [R_1,R_2] \times S^{N-1} \to \mathbb{R}$ and for each $\omega \in S^{N-1}$ define $g_\omega(r) := F(r\omega) = F(x)$, where $x \in A$, with $A := B(0,R_2) - B(0,R_1)$; $0 < R_1 \le r \le R_2$, $r = |x|$, $\omega = x/r \in S^{N-1}$. For each $\omega \in S^{N-1}$ we assume that $g_\omega \in AC^n([R_1,R_2])$, where $n := \lceil \nu \rceil$, $\nu \ge 0$. Thus, by Corollary 8, for almost all $r \in [R_1,R_2]$ there exists the Caputo fractional derivative
$$D_{*R_1}^{\nu} g_\omega(r) = \frac{1}{\Gamma(n-\nu)} \int_{R_1}^{r} (r-t)^{n-\nu-1} g_\omega^{(n)}(t)\, dt, \qquad (109)$$
for all $\omega \in S^{N-1}$. Now we are ready to give
Definition 62. Let $F : A \to \mathbb{R}$, $\nu \ge 0$, $n := \lceil \nu \rceil$, such that $F(\cdot\,\omega) \in AC^n([R_1,R_2])$ for all $\omega \in S^{N-1}$. We call the Caputo radial fractional derivative the following function
$$\frac{\partial_{*R_1}^{\nu} F(x)}{\partial r^{\nu}} := \frac{1}{\Gamma(n-\nu)} \int_{R_1}^{r} (r-t)^{n-\nu-1} \frac{\partial^n F(t\omega)}{\partial r^n}\, dt, \qquad (110)$$
where $x \in A$, i.e. $x = r\omega$, $r \in [R_1,R_2]$, $\omega \in S^{N-1}$. Clearly
$$\frac{\partial_{*R_1}^{0} F(x)}{\partial r^{0}} = F(x), \qquad \frac{\partial_{*R_1}^{\nu} F(x)}{\partial r^{\nu}} = \frac{\partial^{\nu} F(x)}{\partial r^{\nu}} \ \text{ if } \nu \in \mathbb{N}.$$
The function in (110) exists almost everywhere for $x \in A$. We justify this next.
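Before turning to that justification, it may help to see that, for a fixed direction $\omega$, (110) is an ordinary one-dimensional Caputo derivative in $r$ and can be evaluated by quadrature. The sketch below is an illustrative aside, not part of the paper; it assumes Python with SciPy, uses the radial test function $F(x) = (|x| - R_1)^2$ with $\nu = 1/2$ (so $n = 1$), and compares the quadrature of (110) with the known closed form $D_{*R_1}^{\nu}(r-R_1)^2 = \frac{2}{\Gamma(3-\nu)}(r-R_1)^{2-\nu}$.

from math import gamma
from scipy.integrate import quad

R1, nu = 1.0, 0.5
n = 1                                     # n = ceil(nu)

def dn_F(t):
    """n-th radial derivative of F(t*omega) = (t - R1)^2; here n = 1."""
    return 2.0 * (t - R1)

def caputo_radial(r):
    """Quadrature of the right-hand side of (110) for the radial test function."""
    integrand = lambda t: (r - t) ** (n - nu - 1) * dn_F(t)   # integrable singularity at t = r
    val, _ = quad(integrand, R1, r)
    return val / gamma(n - nu)

def exact(r):
    """Known Caputo derivative of (r - R1)^2 of order nu."""
    return 2.0 / gamma(3 - nu) * (r - R1) ** (2 - nu)

for r in (1.25, 1.5, 2.0):
    print(r, caputo_radial(r), exact(r))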


Note 63. Call
$$\Lambda_1 := \left\{ r \in [R_1,R_2] : \frac{\partial_{*R_1}^{\nu} F(x)}{\partial r^{\nu}} \text{ does not exist} \right\}. \qquad (111)$$

We have that Lebesgue measure λR (Λ1 ) = 0. Call ΛN := Λ1 × S N −1 . So there exists a Borel set Λ∗1 ⊂ [R1 , R2 ] , such that Λ1 ⊂ Λ∗1 , λR (Λ∗1 ) = λR (Λ1 ) = 0, thus RN (Λ∗1 ) = 0. Consider now Λ∗N := Λ∗1 ×S N −1 ⊂ A, which is a Borel set of RN −{0} . Clearly then by Theorem 59, λRN (Λ∗N ) = 0, but ΛN ⊂ Λ∗N , implying λRN (ΛN ) = 0. Consequently (110) exists a.e. in x with respect to λRN on A. We give the following fundamental representation result. Theorem 64. Let ν ≥ γ + 1, γ ≥ 0, n := dνe , F : A → R with F ∈ L1 (A) . Assume that F (·ω) ∈ AC n ([R1 , R2 ]) for all ω ∈ S N −1 , and that L∞ (R1 , R2 ) for all ω ∈ S N −1 . ∂ν

ν ∂∗R F (·ω) 1 ∂r ν



F (x)

1 Further assume that ∗R∂r ∈ L∞ (A) . More precisely, for these r ∈ ν N −1 ν [R1 , R2 ] , for each ω ∈ S , for which D∗R F (rω) takes real values, there 1 ν exists M1 > 0 such that D∗R1 F (rω) ≤ M1 .

We suppose that Then

∂ i F (R1 ω) ∂r j

= 0, j = 0, 1, . . . , n − 1, for every ω ∈ S N −1 .

γ ∂∗R F (x) γ 1 = D∗R F (rω) = 1 ∂rγ Z r  1 ν−γ−1 ν (r − t) D∗R F (tω) dt, 1 Γ (ν − γ) R1

true ∀x ∈ A, i.e. true ∀r ∈ [R1 , R2 ] and ∀ω ∈ S N −1 , γ > 0. Here γ D∗R F (·ω) ∈ AC ([R1 , R2 ]) , 1

(112)

(113)

N −1

∀ω ∈ S , γ > 0. Furthermore

γ ∂∗R F (x) 1 ∈ L∞ (A) , γ > 0. ∂rγ

(114)

In particular, it holds 1 F (x) = F (rω) = Γ (ν)

Z

r

ν−1

(r − t) R1

 ν D∗R F (tω) dt, 1

(115)

true ∀x ∈ A, i.e. true ∀r ∈ [R1 , R2 ] and ∀ω ∈ S N −1 , and F (·ω) ∈ AC ([R1 , R2 ]) , ∀ω ∈ S N −1 .

(116)

Proof. By our assumptions and Theorem 16, Corollary 14, we have valid (112) and (115) . Also (113) is clear, see [5]. Property (116) is easy to prove. Fixing r ∈ [R1 , R2 ] , the function ν−γ−1

δr (t, ω) := (r − t)


ν D∗R F (tω) 1


is measurable on




 [R1 , r] × S N −1 , B [R1 ,r] × B S N −1 .

Here B [R1 ,r] × B S N −1 stands for the complete σ−algebra generated by B [R1 ,r] × B S N −1 , where B X stands for the completion of BX . Then we get that  Z r Z |δr (t, ω)| dt dω = S N −1

Z

Z

R1

r

ν−γ−1

(r − t) S N −1

R1

 ν D∗R F (tω) dt dω ≤ 1

ν Z   Z r

∂∗R1 F (x) ν−γ−1

(r − t) dt dω

∂rν S N −1 R1 ∞,([R1 ,r]×S N −1 )

ν

  ν−γ

∂∗R1 F (x) 2π N/2 (r − R1 )

= ≤

∂rν Γ (N/2) (ν − γ) ∞,([R1 ,r]×S N −1 )

ν

  ν−γ

∂∗R1 F (x) (R2 − R1 ) 2π N/2

< ∞.

∂rν

Γ (N/2) (ν − γ) ∞,A

(117) (118)

(119)

Hence δr (t, ω) is integrable on   [R1 , r] × S N −1 , B [R1 ,r] × B S N −1 . γ Consequently, by Fubini’s theorem and (112) , we obtain that D∗R F (rω) , 1  N −1 ν ≥ γ + 1, γ > 0 is integrable in ω over S , B S N −1 . So we have that γ D∗R F (rω) is continuous in r ∈ [R , R ] , ∀ω ∈ S N −1 , and measurable in 1 2 1 N −1 ω∈S , ∀r ∈ [R1 , R2 ] . So, it is a Carath´eodory function. Here [R1 , R2 ] is a separable metric space and S N −1 is a measurable space, and the function takes values in R∗ := R ∪ {±∞} , which is a metric space. Therefore  by Theorem γ N −1 − measurable 20.15, p. 156 of [1], D∗R F (rω) is jointly B × B [R1 ,R2 ] S 1 on [R1 , R2 ] × S N −1 = A, that is Lebesgue measurable on A. Indeed then we have that γ 1 D F (rω) ≤ ∗R1 Γ (ν − γ) Z r ν−γ−1 ν (r − t) D∗R1 F (tω) dt (120) R1



ν

D F (·ω) ∗R1 ∞,[R

1 ,R2 ]

Z

Γ (ν − γ) ν−γ

M1 (r − R1 ) Γ (ν − γ) (ν − γ)



r

ν−γ−1

(r − t)

 dt ≤

R1

M1 ν−γ (R2 − R1 ) := τ < ∞, Γ (ν − γ − 1)

for all ω ∈ S N −1 and for all r ∈ [R1 , R2 ] . 31

(121)


I.e. we proved that γ D F (rω) ≤ τ < ∞, ∀ω ∈ S N −1 , and ∀r ∈ [R1 , R2 ] . ∗R1 Hence proving

γ ∂∗R F (x) 1 ∂r γ

(122)

∈ L∞ (A) , γ > 0. We have completed our proof.



5  Main results on a spherical shell

5.1  Results involving one function

We give Theorem 65. Let ν ≥ γi + 1, γi ≥ 0, i = 1, . . . , l ∈ N, n := dνe , and 0 ≤ 64. Let ri > 0 : Pl γ1 < γ2 ≤ γ3 ≤ . . . ≤ γl . Here f : A → R is as0 in Theorem 1 1 r = p. If γ = 0 we set r = 1. Let s , s > 1 : + i 1 1 1 1 i=1 s1 s0 = 1, and s2 , s02 > 1 :

1 s2

+

1 s02

1

= 1, and p > s2 , N ≥ 2. Denote (N −1)s01 +1

R2

Q1 (R2 ) := 

(1−N )  R2

Q2 (R2 ) := 

(N −1)s01 +1

− R1 (N − 1) s01 + 1 s02 p

+1

(1 −

and σ :=

(1−N ) − R1 s0 N ) p2 + 1

!1/s01 ,

s02 p

+1

(123)

p/s02  

,

p − s2 . ps2

(124)

(125)

Also call C := Q1 (R2 ) Q2 (R2 ) l  Y

 σ ri σ r σ r (Γ (ν − γi )) i (ν − γi − 1 + σ) i i=1 Pl  (ν−γi −1)ri + sp + s1 −1 2 1 (R2 − R1 ) i=1 P    1/s1 . l p (ν − γ − 1) r s + s + 1 − 1 i i 1 1 i=1 s2 Then

(126)

Z Y l γi ∂∗R1 f (x) ri ∂rγi dx ≤ A i=1

Z ν ∂∗R1 f (x) p C ∂rν dx. A 32

(127)


Proof. Clearly here f (·ω) fulfills all the assumptions of Theorem 24, ∀ω ∈ S N −1 . We set there q1 (r) = q2 (r) := rN −1 , r ∈ [R1 , R2 ] . Hence by (30) we have Z

R2

r

N −1

R1

l γi Y ∂∗R1 f (rω) ri dr ≤ ∂rγi

i=1

ν Z R2 ∂ f (rω) p N −1 ∗R1 dr, ∀ω ∈ S N −1 . C r ∂rν R1

(128)

Therefore it holds Z

R2

Z

r S N −1

N −1

R1

Z

i=1

Z

R2

C S N −1

! l γi Y ∂∗R1 f (rω) ri dr dω ≤ ∂rγi

R1

ν p ! ∂ f (rω) ∗R 1 dr dω. rN −1 ∂rν

(129)

Using conclusion of Theorem 64 and Proposition 60 we derive (127) .  We continue with the following extreme case. Theorem 66. Let ν ≥ γi + 1, γi ≥ 0, i = 1, . . . , l ∈ N, n := dνe , and 0 ≤ Pl γ1 < γ2 ≤ γ3 ≤ . . . ≤ γl . Here f : A → R as in Theorem 64. Let ri > 0 : i=1 ri = r. If γ1 = 0 we set r1 = 1, N ≥ 2. Call P rν− li=1 ri γi +1 N −1 R (R − R ) 2 1 2 f :=   > 0. (130) M Ql Pl ri (Γ (ν − γ + 1)) rν − r γ + 1 i i=1 i=1 i i Then Z A

! l γi Y ∂∗R1 f (x) ri dx ≤ ∂rγi

i=1

N/2 fM1r 2π M . Γ (N/2)

(131)

Proof. Clearly here f (·ω) fulfills all the assumptions of Theorem 26, ∀ω ∈ S N −1 . We set qe (r) = rN −1 , r ∈ [R1 , R2 ] . Hence by (35) we have Z

R2

R1

rN −1

l Y γi D f (rω) ri dr ≤ ∗R1 i=1


 P

ν

rν− li=1 ri γi +1 N −1

∂∗R1 f (·ω) R (R − R ) 2 1 2

 Q   Pl

l ri ∂rν ∞,[R1 ,R2 ] rν − r γ + 1 (Γ (ν − γ + 1)) i i=1 i i i=1 

fM1r , ∀ω ∈ S N −1 . ≤M Hence Z

Z

S N −1

R2

R1

(132)

! l Y r i γ D i f (rω) dr dω ≤ rN −1 ∗R1 i=1 N/2 fM r 2π M . 1 Γ (N/2)

(133)

Using the conclusion of Theorem 64 and Proposition 60 we derive (131) . 

5.2  Results involving two functions

We give We need to make Assumption 67. Let ν ≥ γi + 1, γi ≥ 0, i = 1, 2, n := dνe , f1 , f2 : A → R with f1 , f2 ∈ L1 (A) , where A := B (0, R2 ) − B (0, R1 ), 0 < R1 < R2 . Assume

ν ∂∗R fi (·ω) 1 ∂r ν ν ∂ 1 fi (x) ∈ that ∗R∂r ν N −1

that f1 (·ω) , f2 (·ω) ∈ AC n ([R1 , R2 ]) for all ω ∈ S N −1 , and that ∈ L∞ (R1 , R2 ) , for all ω ∈ S N −1 ; i = 1, 2. Further assume L∞ (A) , i = 1, 2. More precisely, for these r ∈ [R1 , R2 ] , ∀ω ∈ S ν f (rω) takes real values, there exists Mi > 0 such that D∗R 1 i ν D∗R fi (rω) ≤ Mi , for i = 1, 2.

, for which

(134)

1

We suppose that ∂ j fi (R1 ω) = 0, j = 0, 1, . . . , n − 1, ∂rj ∀ω ∈ S N −1 ; i = 1, 2. Let λν > 0, and λα , λβ ≥ 0, such that λν < p, where p > 1. If γ1 = 0 we set λα = 1 and if γ2 = 0 we set λβ = 1, here N ≥ 2. Assumption 67* . (continuation of Assumption 67) Set Z w (ν−γk −1)p/(p−1) ( 1−N Pk (w) := (w − t) t p−1 ) dt, (135) R1

k = 1, 2, R1 ≤ w ≤ R2 , A (w) :=

w(N −1)(1−

λν p

λ p−1 ) (P (w))λα ( p−1 p ) (P2 (w)) β ( p ) 1 λα

(Γ (ν − γ1 ))


λβ

(Γ (ν − γ2 ))

,

(136)


Z

ν) ! (p−λ p

R2

p/(p−λν )

(A (w))

A0 (R2 ) :=


dw

.

(137)

R1

We present Theorem 68. All here as in Assumption 67, 67 * , especially assume λα > 0, λβ = 0 and p = λα + λν > 1. Then ν λ Z " γ1 ∂∗R1 f1 (x) λα ∂∗R f (x) ν 1 1 + ∂rγ1 ∂rν A

γ1 ν λ # ∂∗R1 f2 (x) λα ∂∗R f (x) ν 1 2 dx ≤ ∂rγ1 ∂rν

(138)

 λν λν ( p ) A0 (R2 ) |λβ =0 p p ν  Z  ν ∂∗R1 f1 (x) ∂ f (x) p + ∗R1 2 dx. ∂rν ∂rν 



A

Proof. We apply here Theorem 30 for every ω ∈ S N −1 , here p (r) = q (r) = r , r ∈ [R1 , R2 ] . Use of Theorem 64 and Proposition 60. So proof is similar to the proof of Theorem 65.  It follows the counterpart of the last theorem. Theorem 69. All here as in Assumption 67, 67 * , especially suppose λα = 0, λβ > 0, p = λν + λβ > 1. Denote  λ /λ 2 β ν − 1, if λβ ≥ λν , (139) δ3 := 1, if λβ ≤ λν . N −1

Then

ν λ Z " γ2 ∂∗R1 f2 (x) λβ ∂∗R f (x) ν 1 1 + ∂rγ2 ∂rν A ν γ2 λν # ∂∗R1 f1 (x) λβ ∂∗R f (x) 2 1 dx ≤ ∂rγ2 ∂rν

(140)

 λν λν ( p ) λν /p (A0 (R2 ) |λα =0 ) 2 δ3 p ν p  Z  ν ∂∗R1 f1 (x) p ∂∗R f (x) 1 2 + dx. ∂rν ∂rν A λβ /p



Proof. Based on Theorem 31, similar to the proof of Theorem 68.


Theorem 70. All here as in Assumption 67, 67 * , especially suppose λν , λα , λβ > 0, p = λα + λβ + λν > 1. Denote (  λα +λβ  2 λν − 1, if λα + λβ ≥ λν , γ e1 := (141) 1, if λα + λβ ≤ λν . Then

γ2 λ ν λ Z " γ1 ∂∗R1 f1 (x) λα ∂∗R f (x) β ∂∗R f (x) ν 1 2 1 1 + ∂rγ1 ∂rγ2 ∂rν A γ2 γ1 λ ν λ # ∂∗R1 f1 (x) λβ ∂∗R f (x) α ∂∗R f (x) ν 1 2 1 2 dx ≤ ∂rγ2 ∂rγ1 ∂rν  (λν /p) h i λν λ /p A0 (R2 ) λλαν /p + 2(λα +λβ )/p (e γ1 λβ ) ν (λα + λβ ) p ν p  Z  ν ∂∗R1 f1 (x) p ∂∗R f (x) 1 2 + dx. ∂rν ∂rν A

(142)

Proof. Based on Theorem 32, similar to the proof of Theorem 68.  We give the next special important case Theorem 71. All as in Assumption 67 without λν there. Here γ2 = γ1 + 1, λα ≥ 0, λβ := λα+1 ∈ (0, 1) , and p = λα + λα+1 > 1. Denote  (λ /λ ) 2 α α+1 − 1 if λα ≥ λα+1 θ3 := (143) 1, if λα ≤ λα+1 ,  (1 − λα+1 ) L (R2 ) := 2 (N − λα+1 ) !#(1−λα+1 )  λ N −λα+1 N −λα+1 θ3 λα+1 α+1 1−λα+1 1−λα+1 R2 − R1 , (144) p and Z

R2

1−N

t( p−1 ) dt,

(145)

2p−1 .

(146)

(ν−γ1 −1)p/(p−1)

(R2 − t)

P1 (R2 ) := R1

(p−1)

Φ (R2 ) := L (R2 ) Then Z A

P1 (R2 ) p (Γ (ν − γ1 ))

!

 λα+1 γ1 γ1 +1 ∂∗R1 f1 (x) λα ∂∗R f (x) 2 1  + ∂rγ1 ∂rγ1 +1

λα+1  γ1 +1 γ1 ∂∗R1 f2 (x) λα ∂∗R f (x) 1 1  dx ∂rγ1 ∂rγ1 +1 36

ANASTASSIOU: FRACTIONAL INEQUALITIES

77

ν p  Z  ν ∂∗R1 f1 (x) p ∂∗R f (x) 1 2 dx. ≤ Φ (R2 ) + ∂rν ∂rν A

(147)

Proof. Based on Theorem 33, similar to the proof of Theorem 68.  We give an L∞ result on the shell. We need to make Assumption 72. Let ν ≥ γi + 1, γi ≥ 0, i = 1, 2, n := dνe , f1 , f2 : A → R with f1 , f2 ∈ L1 (A) , where RN ⊇ A := B (0, R2 ) − B (0, R1 ), 0 < R1 < R2 , N ≥ 2. Assume that f1 (·ω) , f2 (·ω) ∈ AC n ([R1 , R2 ]) for all ω ∈ S N −1 , and

ν ∂∗R fi (·ω) 1 ∈ L∞ (R1 , R2 ) , for all ω ∈ S N −1 ; i = 1, 2. Further assume that ∂r ν ν ∂∗R1 fi (x) ∈ L∞ (A) , i = 1, 2. More precisely, for these r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 , ∂r ν ν for which D∗R f (rω) takes real values, there exists Mi > 0 such that 1 i

that

ν D∗R fi (rω) ≤ Mi , for i = 1, 2. 1

(148)

We suppose that ∂ j fi (R1 ω) = 0, j = 0, 1, . . . , n − 1, ∂rj ∀ω ∈ S N −1 ; i = 1, 2. Let λν , λα , λβ ≥ 0. If γ1 = 0 we set λα = 1 and if γ2 = 0 we set λβ = 1. We present Theorem 73. All as in Assumption 72. Set R2N −1 T (R2 − R1 ) := · (νλα − γ1 λα + νλβ − γ2 λβ + 1) (νλα −γ1 λα +νλβ −γ2 λβ +1)

(R2 − R1 )

λα

(Γ (ν − γ1 + 1)) Then

λβ

(Γ (ν − γ2 + 1))

.

(149)

γ2 λ ν λ Z " γ1 ∂∗R1 f1 (x) λα ∂∗R f (x) β ∂∗R f (x) ν 1 1 1 2 + ∂rγ1 ∂rγ2 ∂rν A γ1 λα ν γ2 λν # ∂∗R1 f1 (x) λβ ∂∗R f (x) f (x) ∂ 2 2 1 ∗R1 dx ∂rγ2 ∂rγ1 ∂rν ≤ T (R2 − R1 ) h

2(λα +λν )

M1

2λβ

+ M1

π N/2 Γ (N/2) 2λβ

+ M2

2(λα +λν )

+ M2

i

.

(150)

Proof. Apply Theorem 38 for every ω ∈ S N −1 , here p (r) = rN −1 , r ∈ [R1 , R2 ] . It goes as the proof of Theorem 66. Finally use Theorem 64 and Proposition 60. 

37

78

ANASTASSIOU: FRACTIONAL INEQUALITIES

5.3

Results involving several function

We need to make Assumption 74. Let ν ≥ γi + 1, γi ≥ 0, i = 1, 2, n := dνe , fj : A → R with fj ∈ L1 (A) , j = 1, . . . , M, M ∈ N, where RN ⊇ A := B (0, R2 ) − B (0, R1 ), 0 < R1 < R2 , N ≥ 2. Assume that fj (·ω) ∈ AC n ([R1 , R2 ]) for all ω ∈ S N −1 , and

ν ∂∗R fj (·ω) 1 ∈ L∞ (R1 , R2 ) , for all ω ∈ S N −1 ; j = 1, . . . , M. Further assume ν ∂r ν ∂∗R f (x) j 1 that ∈ L∞ (A) , j = 1, . . . , M. More precisely, for these r ∈ [R1 , R2 ] , ∂r ν N −1 ν ∀ω ∈ S , for which D∗R f (rω) takes real values, there exists Mj > 0 such 1 j

that

that ν D∗R fj (rω) ≤ Mj , for j = 1, . . . , M. 1

(151)

We suppose that ∂ j fj (R1 ω) = 0, k = 0, 1, . . . , n − 1, ∂rk ∀ω ∈ S N −1 ; j = 1, . . . , M. Let λν > 0, and λα , λβ ≥ 0, such that λν < p, where p > 1. If γ1 = 0 we set λα = 1 and if γ2 = 0 we set λβ = 1. We give Theorem 75. Let fj , j = 1, . . . , M, as in Assumption 74. Let λν > 0, and λα > 0; λβ ≥ 0, p := λα + λν > 1. Set Z w p 1−N (ν−γk −1) (p−1) Pk (w) := (w − t) t( p−1 ) dt, (152) R1

k = 1, 2, R1 ≤ w ≤ R2 , A (w) :=

w(N −1)(1−

λν p

λ p−1 ) (P (w))λα ( p−1 p ) (P2 (w)) β ( p ) 1 λα

(Γ (ν − γ1 )) Z

,

(153)

!λα /p

R2

A0 (R2 ) :=

λβ

(Γ (ν − γ2 )) p/λα

(A (w))

dw

.

(154)

R1

Take the case of λβ = 0. Then ν λ M Z γ1 X ∂∗R1 fj (x) λα ∂∗R f (x) ν 1 j dx ∂rγ1 ∂rν j=1

A

 λν λν ( p ) ≤ A0 (R2 ) |λβ =0 p    p M Z ν X ∂∗R f (x) 1 j dx  .  ∂rν 

j=1

A

38



(155)

ANASTASSIOU: FRACTIONAL INEQUALITIES

79

Proof. As in Theorem 68, based on Theorem 45.  We continue with Theorem 76. All basic assumptions as in Theorem 75. Let λν > 0, λα = 0; λβ > 0, p := λν + λβ > 1, P2 defined by (152) . Now it is λν λ p−1 w(N −1)(1− p ) (P2 (w)) β ( p ) , (156) A (w) := λ (Γ (ν − γ2 )) β !λβ /p Z R2

A0 (R2 ) :=

p/λβ

(A (w))

dw

.

(157)

R1

Denote



2λβ /λν − 1, if λβ ≥ λν , 1, if λβ ≤ λν .

δ3 := Call

λβ /p

ϕ2 (R2 ) := A0 (R2 ) 2 Then



λν p

λν /p

(158)

λ /p

δ3 ν .

(159)

 " ν λ Z M −1 γ2 X ∂∗R1 fj+1 (x) λβ ∂∗R f (x) ν 1 j +  ∂rγ2 ∂rν A

j=1

γ2 ν λν #) ∂∗R1 fj (x) λβ ∂∗R f (x) j+1 1 + ∂rγ2 ∂rν " γ λ ν λ 2 ∂∗R f (x) β ∂∗R f (x) ν 1 M 1 1 + ∂rγ2 ∂rν γ2 ν λν #) ∂∗R1 f1 (x) λβ ∂∗R f (x) M 1 dx ≤ ∂rγ2 ∂rν    M Z ν X ∂∗R1 fj (x) p dx  . 2ϕ2 (R2 )  ∂rν A j=1

(160)

Proof. As in Theorem 68, based on Theorem 46.  We present the general case Theorem 77. All basic assumptions as in Theorem 75. Here λν , λα , λβ > 0, p := λα + λβ + λν > 1, Pk as in (152) , A as in (153) . Here Z

β ! λα +λ p

R2

A0 (R2 ) :=

p/(λα +λβ )

(A (w))

dw

,

(161)

R1

(  λα +λβ  2 λν − 1, if λα + λβ ≥ λν , γ e1 := 1, if λα + λβ ≤ λν . 39

(162)

80

ANASTASSIOU: FRACTIONAL INEQUALITIES

Put

(λν /p) λν ϕ3 (R2 ) := A0 (R2 ) (λα + λβ ) p    λα +λβ  λν ( (λν /p) p p ) +2 λα (e γ1 λβ ) . 

(163)

Then  Z

M −1 X

 A

j=1

" γ λ γ2 λ ν λ 1 ∂∗R f (x) β ∂∗R f (x) α ∂∗R f (x) ν 1 j+1 1 j 1 j + ∂rγ1 ∂rγ2 ∂rν

γ2 γ1 λ ν λ # ∂∗R1 fj (x) λβ ∂∗R fj+1 (x) α ∂∗R fj+1 (x) ν 1 1 + ∂rγ2 ∂rγ1 ∂rν " γ λ γ2 λ ν λ 1 ∂∗R f (x) β ∂∗R f (x) α ∂∗R f (x) ν 1 M 1 1 1 1 + ∂rγ1 ∂rγ2 ∂rν γ2 γ1 λ ν λ ## ∂∗R1 f1 (x) λβ ∂∗R f (x) α ∂∗R f (x) ν 1 M 1 M dx ≤ ∂rγ2 ∂rγ1 ∂rν    M Z ν X ∂∗R1 fj (x) p dx  . 2ϕ3 (R2 )  ∂rν j=1

(164)

A

Proof. As in Theorem 68, based on Theorem 47.  We show the special important case next. Theorem 78. Let all as in Assumption 74 without λν there. Here γ2 = γ1 +1, and let λα > 0, λβ := λα+1 , 0 < λα+1 < 1, such that p := λα +λα+1 > 1. Denote  (λ /λ ) 2 α α+1 − 1, if λα ≥ λα+1 , θ3 := (165) 1, if λα ≤ λα+1 ,  (1 − λα+1 ) L (R2 ) := 2 (N − λα+1 ) !# (1−λ λα+1 α+1 )  N −λα+1 N −λα+1 θ3 λα+1 1−λα+1 1−λα+1 R2 − R1 , (166) λα + λα+1 and Z

R2

p (ν−γ1 −1)( p−1 )

(R2 − t)

P (R2 ) :=

1−N

t( p−1 ) dt,

(167)

2(p−1) .

(168)

R1 (p−1)

Φ (R2 ) := L (R2 )

P1 (R2 ) p (Γ (ν − γ1 ))

40

!

ANASTASSIOU: FRACTIONAL INEQUALITIES

Then

81

  λα+1 γ1 +1 Z M −1 γ1 X ∂∗R1 fj (x) λα ∂∗R f (x) j+1 1  +  ∂rγ1 ∂rγ1 +1 A

j=1

λα+1  γ1 γ1 +1  ∂∗R1 fj+1 (x) λα ∂∗R f (x) j 1  + ∂rγ1 +1  ∂rγ1  λα+1 γ1 γ1 +1 ∂ f (x) λα ∂∗R fM (x) 1  ∗R1 1 + ∂rγ1 ∂rγ1 +1 λα+1  γ1 γ1 +1  ∂∗R1 fM (x) λα ∂∗R f1 (x) 1  dx ≤ ∂rγ1 +1  ∂rγ1    M Z ν X ∂∗R1 fj (x) p dx  . 2Φ (R2 )  ∂rν j=1

(169)

A

Proof. As in Theorem 68, based on Theorem 48.  We study the L∞ case next. We need to make Assumption 79. Let ν ≥ γi + 1, γi ≥ 0, i = 1, 2, n := dνe , fj : A → R with fj ∈ L1 (A) , j = 1, . . . , M, M ∈ N, where RN ⊇ A := B (0, R2 ) − B (0, R1 ), 0 < R1 < R2 , N ≥ 2. Assume that fj (·ω) ∈ AC n ([R1 , R2 ]) for all ω ∈ S N −1 , and

ν ∂∗R fj (·ω) 1 ∈ L∞ (R1 , R2 ) , for all ω ∈ S N −1 ; j = 1, . . . , M. Further assume ∂r ν ν ∂∗R fj (x) 1 ∈ L∞ (A) , j = 1, . . . , M. More precisely, for these r ∈ [R1 , R2 ] , that ∂r ν N −1 ν ∀ω ∈ S , for which D∗R f (rω) takes real values, there exists Mj > 0 such 1 j

that

that ν D∗R fj (rω) ≤ Mj , for j = 1, . . . , M. 1

(170)

We suppose that ∂ k fj (R1 ω) = 0, k = 0, 1, . . . , n − 1, ∂rk ∀ω ∈ S N −1 ; j = 1, . . . , M. Let λν , λα , λβ ≥ 0. If γ1 = 0 we set λα = 1 and if γ2 = 0 we set λβ = 1. The last main result follows. Theorem 80. All as in Assumption 79. Set R2N −1 · T (R2 ) := (νλα − γ1 λα + νλβ − γ2 λβ + 1) (νλα −γ1 λα +νλβ −γ2 λβ +1)

(R2 − R1 )

λα

(Γ (ν − γ1 + 1))

λβ

(Γ (ν − γ2 + 1))

41

.

(171)

82

ANASTASSIOU: FRACTIONAL INEQUALITIES

Then  " γ2 λ ν λ Z M −1 γ1 X ∂∗R1 fj (x) λα ∂∗R f (x) β ∂∗R f (x) ν 1 j+1 1 j + ∂rγ1  ∂rγ2 ∂rν A

j=1

γ2 γ1 λ ν λ #) ∂∗R1 fj (x) λβ ∂∗R fj+1 (x) α ∂∗R fj+1 (x) ν 1 1 + ∂rγ2 ∂rγ1 ∂rν " γ λ γ2 λ ν λ 1 ∂∗R f (x) β ∂∗R f (x) α ∂∗R f (x) ν 1 M 1 1 1 1 + ∂rγ1 ∂rγ2 ∂rν γ2 γ1 λ ν λ #) ∂∗R1 f1 (x) λβ ∂∗R f (x) α ∂∗R f (x) ν 1 M 1 M dx ≤ ∂rγ2 ∂rγ1 ∂rν   −1 n M o X 2π N/2 2λ 2(λ +λ ) T (R2 ) Mj α ν + Mj β .   Γ (N/2)

(172)

j=1

Proof. Based on Theorem 52; here p (r) = rN −1 , r ∈ [R1 , R2 ] , apply (97) ∀ω ∈ S N −1 . It goes as the proof of Theorem 66. Finally use Theorem 64 and Proposition 60. 

6

Applications

We need Corollary 81. (to Theorem 68, f1 = f2 ) All as in Theorem 68. It holds ν λ Z γ1 ∂∗R1 f1 (x) λα ∂∗R f (x) ν 1 1 dx ≤ ∂rγ1 ∂rν A A0 (R2 ) |λβ =0





 λν Z ν  ∂∗R1 f1 (x) p λν ( p ) dx . p ∂rν A

(173)

So setting λα = λν = 1, p = 2 in (173) , we obtain in detail Proposition 82. Let ν ≥ γ + 1, γ ≥ 0, n := dνe , f : A → R with f ∈ L1 (A) , where A := B (0, R2 ) − B (0, R1 ) ⊆ RN , N ≥ 2, 0 < R1 < R2 , . Assume that f (·ω) ∈ AC n ([R1 , R2 ]) , ∀ω ∈ S N −1 , and that

ν ∂∗R f (·ω) 1 ∂r ν

∂ ν f (x) 1 ∀ω ∈ S . Further assume that ∗R∂r ∈ L∞ (A) . More ν N −1 ν , for which D∗R r ∈ [R1 , R2 ] , ∀ω ∈ S f (rω) takes real 1 N −1

such that ν D∗R f (rω) ≤ M1 . 1 42

∈ L∞ (R1 , R2 ) ,

precisely, for these values, ∃ M1 > 0

ANASTASSIOU: FRACTIONAL INEQUALITIES

83

Suppose that ∂ j f (R1 ω) = 0, j = 0, 1, . . . , n − 1, ∀ω ∈ S N −1 . ∂rj Set

Z

r

2(ν−γ−1) (1−N )

(r − t)

P (r) :=

t

dt, R1 ≤ r ≤ R2 ,

(174)

R1

r(

N −1 2

) pP (r) A (r) := , Γ (ν − γ) !1/2 Z R2 2 e (A (r)) dr . A0 (R2 ) :=

(175)

(176)

R1

Then

ν Z γ ∂∗R1 f (x) ∂∗R f (x) 1 ∂rγ ∂rν dx ≤ A 2 ! Z  ν ∂∗R1 f (x) −1/2 e0 (R2 ) 2 A dx . ∂rν A

(177)

When γ = 0 we get in detail Proposition 83. Let ν ≥ 1, n := dνe , f : A → R with f ∈ L1 (A) , where A := B (0, R2 ) − B (0, R1 ) ⊆ RN , N ≥ 2, 0 < R1 < R2 , . Assume that f (·ω) ∈ AC n ([R1 , R2 ]) , ∀ω ∈ S N −1 , and that

ν ∂∗R f (·ω) 1 ∂r ν

∂ ν f (x) 1 ∈ L∞ (A) . More ∀ω ∈ S . Further assume that ∗R∂r ν N −1 ν f (rω) takes real r ∈ [R1 , R2 ] , ∀ω ∈ S , for which D∗R 1 N −1

∈ L∞ (R1 , R2 ) ,

precisely, for these values, ∃ M1 > 0

such that ν D∗R f (rω) ≤ M1 . 1 Suppose that ∂ j f (R1 ω) = 0, j = 0, 1, . . . , n − 1, ∀ω ∈ S N −1 . ∂rj Set

Z

r

2(ν−1) (1−N )

(r − t)

P0 (r) :=

t

dt, R1 ≤ r ≤ R2 ,

(178)

R1

r(

A∗ (r) := Z

N −1 2

) pP (r) 0 , Γ (ν) !1/2

R2

ee A 0 (R2 ) :=

2

(A∗ (r)) dr R1

Then

ν ∂ f (x) |f (x)| ∗R1 ν dx ≤ ∂r A

Z

43

(179)

.

(180)

84

ANASTASSIOU: FRACTIONAL INEQUALITIES

ee −1/2 A 0 (R2 ) 2

Z  A

ν ∂∗R f (x) 1 ∂rν

2

! dx .

(181)

Based on Corollary 35 we give Proposition 84. Let ν ≥ γ + 1, γ ≥ 0, n := dνe , f : A → R with f ∈ L1 (A) , where A := B (0, R2 ) − B (0, R1 ) ⊆ RN , N ≥ 2, 0 < R1 < R2 . Assume that f (·ω) ∈ AC n ([R1 , R2 ]) , ∀ω ∈ S N −1 , and that ∀ω ∈ S N −1 . Suppose that

ν ∂∗R f (·ω) 1 ∂r ν

∈ L∞ (R1 , R2 ) ,

∂ j f (R1 ω) = 0, j = 0, 1, . . . , n − 1, ∀ω ∈ S N −1 . ∂rj Then

Z

r

γ D

1)

∗R1 f

R1

Z

r

R1

ν (tω) D∗R f (tω) dt ≤ 1

! (ν−γ) (r − R1 ) √ √ 2Γ (ν − γ) ν − γ 2ν − 2γ − 1  2 ν D∗R1 f (tω) dt , all R1 ≤ r ≤ R2 , ∀ω ∈ S N −1 .

Z

r

(182)

2) When γ = 0 we get ν |f (tω)| D∗R f (tω) dt ≤ 1

R1



ν

(r − R1 ) √ √ 2Γ (ν) ν 2ν − 1

r

 Z

R1

ν D∗R f 1

 2 (tω) dt ,

(183)

all R1 ≤ r ≤ R2 , ∀ω ∈ S N −1 . In particular we have Z R2 ν 3) f (rω) dr ≤ |f (rω)| D∗R 1 R1



ν

(R2 − R1 ) √ √ 2Γ (ν) ν 2ν − 1

 Z

R2

R1

ν D∗R f 1

! 2 (rω) dr , ∀ω ∈ S N −1 .

(184)

Next we apply Proposition 84, see (183) for proving uniqueness of solution in a PDE initial value problem on A. Theorem 85. Let ν > 1, ν ∈ / N, n := dνe , f : A → R with f ∈ L1 (A) , where A := B (0, R2 ) − B (0, R1 ) ⊆ RN , N ≥ 2, 0 < R1 < R2 , . Assume that f (·ω) ∈ AC n ([R1 , R2 ]) , ∀ω ∈ S N −1 , and that N −1

∀ω ∈ S . Further assume ν ≤ M1 , ∀r with D∗R f (rω) 1

ν D∗R f (x) 1 ∈ ∂r ν ∈ [R1 , R2 ] ,

ν ∂∗R f (·ω) 1 ∂r ν

∈ AC ([R1 , R2 ]) ,

L∞ (A) , such that there exists M1 > 0 ∀ω ∈ S N −1 .Suppose that

∂ j f (R1 ω) = 0, j = 0, 1, . . . , n − 1, ∀ω ∈ S N −1 . ∂rj 44

ANASTASSIOU: FRACTIONAL INEQUALITIES

Consider the PDE ∂ ∂r



ν ∂∗R f (x) 1 ∂rν

85

 = θ (x) f (x) ,

(185)

∀x ∈ A, where 0 6= θ : A → R is continuous. If (185) has a solution then it is unique. Proof. We rewrite (185) as   ν ∂∗R1 f (rω) ∂ = θ (rω) f (rω) , ∂r ∂rν

(186)

 valid ∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 and 0 6= θ : [R1 , R2 ] × S N −1 → R is continuous. Assume f1 and f2 are solution to (185) , then   ν ∂∗R1 f1 (rω) ∂ = θ (rω) f1 (rω) , (187) ∂r ∂rν and ∂ ∂r



ν ∂∗R f (rω) 1 2 ∂rν

 = θ (rω) f2 (rω) ,

∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 . Call g := f1 − f2 , thus by subtraction in (187) we get  ν  ∂∗R1 g (rω) ∂ = θ (rω) g (rω) , ∂r ∂rν

(188)

(189)

∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 . Of course ∂ j g (R1 ω) = 0, j = 0, 1, . . . , n − 1, ∀ω ∈ S N −1 . ∂rj Consequently we have  ν   ν  ∂∗R1 g (rω) ∂ ∂∗R1 g (rω) ∂rν ∂r ∂rν  ν  ∂∗R1 g (rω) = θ (rω) g (rω) , ∂rν ∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 . Hence   ν  Z r ν ∂∗R1 g (tω) ∂∗R1 g (tω) ∂ dt ∂rν ∂r ∂rν R1  ν  Z r ∂∗R1 g (tω) = θ (tω) g (tω) dt, ∂rν R1 45

(190)

(191)

86

ANASTASSIOU: FRACTIONAL INEQUALITIES

∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 . Therefore we find  ∂ ν g(tω) 2 r ∗R1  ν  Z r ∂∗R1 g (tω) ∂r ν θ (tω) g (tω) dt, = 2 ∂rν R1

(192)

R1

∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 . ∂ν

g(R1 ω)

Notice that ∗R1∂rν = 0, ∀ω ∈ S N −1 , see (110) . Consequently we find 

2

ν ∂∗R g (tω) 1 θ (tω) g (tω) ∂rν R1 ν Z r ∂∗R1 g (tω) dt ≤ 2 kgk∞ |g (tω)| ∂rν R1  ν  (183) kθk∞ (r − R1 ) ≤ √ √ Γ (ν) ν 2ν − 1 Z r  2 ν D∗R1 g (tω) dt ≤

ν ∂∗R g (rω) 1 ∂rν

Z = 2

r





dt (193) (194)

R1



ν

kθk∞ (R2 − R1 ) √ √ Γ (ν) ν 2ν − 1

∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 . Call

 Z

r

R1

2 ν D∗R g (tω) 1

 dt ,

(195)

ν

K :=

kθk∞ (R2 − R1 ) > 0. √ √ Γ (ν) ν 2ν − 1

(196)

So we have proved that 

ν ∂∗R g (rω) 1 ∂rν

2

Z

r

≤K R1

2 ν D∗R g (tω) 1

 dt ,

(197)

ν ∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 . Here D∗R g (·ω) ∈ C ([R1 , R2 ]) , ∀ω ∈ S N −1 . 1 2 ν ν Hence by Gr¨ onwall’s inequality we get D∗R g (rω) ≡ 0, so that D∗R g (rω) ≡ 1 1  ∂ ν g(rω)  ∗R1 ∂ N −1 ≡ 0, ∀r ∈ [R1 , R2 ] , ∀ω ∈ 0, ∀r ∈ [R1 , R2 ] , ∀ω ∈ S . Thus ∂r ∂r ν

S N −1 . And by (188) we have θ (rω) g (rω) ≡ 0, ∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 , implying g (rω) ≡ 0, ∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 . Hence proving f1 (rω) = f2 (rω) , ∀r ∈ [R1 , R2 ] , ∀ω ∈ S N −1 . Thus f1 (x) = f2 (x) , ∀x ∈ A, hence proving the claim.  We give the very important 46

ANASTASSIOU: FRACTIONAL INEQUALITIES

87

Remark 86. From Corollary 12 we saw that: for ν ≥ 0, n := dνe , f ∈ AC n ([a, b]) , given that Daν f (x) exists in R, ∀x ∈ [a, b] , and f (k) (a) = 0, ν k = 0, 1, . . . , n−1, imply that D∗a f = Daν f. Also we saw in Theorem 16, 17, that by adding to the assumptions of Theorem 16 that ” there exists Daν f (x) ∈ R, ∀x ∈ [a, b] ”, we can rewrite the conclusions of Theorem 16, that is getting the conclusions of Theorem 17, in the language of Riemann-Liouville fractional derivatives. Notice there, under the above additional assumption that also holds γ f (x) , ∀x ∈ [a, b] . Daγ f (x) = D∗a Theorem 16 is where is based the whole article. So by adding to the assumptions of all of our results here for all functions involved that ” there exists Daν f (x) ∈ R, ∀x ∈ [a, b] ” we can rewrite them all in terms of Riemann-Liouville fractional derivatives. Accordingly for the case ν of spherical shell we need to add ” there exists DR f (rω) ∈ R, ∀r ∈ [R1 , R2 ] , 1 N −1 for each ω ∈ S ”, and all can be rewritten in terms of Riemann-Liouville radial fractional derivatives. So as examples next we present only few of all these can be rewritten results. We present Theorem 87. Let ν ≥ γ + 1, γ ≥ 0. Call n := dνe and assume f ∈ AC n ([a, b]) such that f (k) (a) = 0, k = 0, 1, . . . , n − 1, and ∃Daν f (x) ∈ R, ∀x ∈ [a, b] with Daν f ∈ L∞ (a, b) . Let p, q > 1 such that p1 + 1q = 1, a ≤ x ≤ b. Then Z x |Daγ f (ω)| |(Daν f ) (ω)| dω ≤ a

(x − a)

pν−pγ−p+2 p

√  1/p q 2 Γ (ν − γ) ((pν − pγ − p + 1) (pν − pγ − p + 2)) Z x 2/q q ν · |Da f (ω)| dω .

(198)

a

Proof. Similar to Theorem 18.  The converse result follows. Theorem 88. Let ν ≥ γ + 1, γ ≥ 0. Call n := dνe and assume f ∈ AC n ([a, b]) such that f (k) (a) = 0, k = 0, 1, . . . , n − 1, and ∃Daν f (x) ∈ R, ∀x ∈ [a, b] with Daν f, D1ν f ∈ L∞ (a, b) . Suppose that Daν f is of fixed sign a.e in a [a, b] . Let p, q such that 0 < p < 1, q < 0 and p1 + 1q = 1, a ≤ x ≤ b. Then Z x |Daγ f (ω)| |Daν f (ω)| dω ≥ a

(x − a)

pν−pγ−p+2 p

√  1/p q 2 Γ (ν − γ) ((pν − pγ − p + 1) (pν − pγ − p + 2)) Z x 2/q q ν · |Da f (ω)| dω . a

47

(199)

88

ANASTASSIOU: FRACTIONAL INEQUALITIES

Proof. As in Theorem 20.  We present Theorem 89. Let ν ≥ γi + 1, γi ≥ 0, i = 1, 2, n := dνe , and f1 , f2 ∈ (j) (j) AC n ([a, b]) such that f1 (a) = f2 (a) = 0, j = 0, 1, . . . , n − 1, a ≤ x ≤ b. 1 , q (t) ∈ L∞ (a, b) . Consider also p (t) > 0 and q (t) ≥ 0, with all p (t) , p(t) ν ν Further assume ∃Da fi (x) ∈ R, ∀x ∈ [a, b] and Da fi ∈ L∞ (a, b) , i = 1, 2. Let λν > 0 and λα , λβ ≥ 0 such that λν < p, where p > 1. Here Pk is as in (39) , A (ω) is as in (40) , A0 (x) as in (41) , δ1 as in (42) . If λβ = 0, we obtain that Z x h λ λ q (ω) |Daγ1 f1 (ω)| α |Daν f1 (ω)| ν + a λα

|Daγ1 f2 (ω)|

A0 (x) |λβ =0 x

Z

p (ω) [|Daν f1

λν

|Daν f2 (ω)|





p

(ω)| +

λν λα + λ ν |Daν f2

i

dω ≤

(200)

λν /p δ1 p

ν ( λα +λ ) p

.

(ω)| ] dω

a

Proof. As in Theorem 30.  Corollary 90. (All as in Theorem 89, λβ = 0, p (t) = q (t) = 1, λα = λν = 1, p = 2.) In detail: (j) (j) Let ν ≥ γ1 + 1, γ1 ≥ 0, n := dνe , f1 , f2 ∈ AC n ([a, b]) : f1 (a) = f2 (a) = ν 0, j = 0, 1, . . . , n − 1, a ≤ x ≤ b; ∃Da fi (x) ∈ R, ∀x ∈ [a, b] with Daν fi ∈ L∞ (a, b) , i = 1, 2. Then Z x [|(Daγ1 f1 ) (ω)| |(Daν f1 ) (ω)| + a

|(Daγ1 f2 ) (ω)| |(Daν f2 ) (ω)|] dω ≤ (ν−γ1 )

(201)

(x − a) √ √ 2Γ (ν − γ1 ) ν − γ1 2ν − 2γ1 − 1 Z

x

h

2 ((Daν f1 ) (ω))

+

2 ((Daν f2 ) (ω))

i

!

 dω ,

a

all a ≤ x ≤ b. We need Definition 91. Let F : A → R, ν ≥ 0, n := dνe such that F (·ω) ∈ AC n ([R1 , R2 ]) , for all ω ∈ S N −1 . We call the Riemann-Liouville radial fractional derivative the following function Z r ν ∂R F (x) 1 ∂n n−ν−1 1 := (r − t) F (tω) dt, (202) ∂rν Γ (n − ν) ∂rn R1 48

ANASTASSIOU: FRACTIONAL INEQUALITIES

89

where x ∈ A, i.e. x = rω, r ∈ [R1 , R2 ] , ω ∈ S N −1 . Clearly 0 ∂∗R F (x) 1 = F (x) , ∂r0 and ν ∂R F (x) ∂ ν F (x) 1 = , if ν ∈ N. ∂rν ∂rν We give Proposition 92. Let ν ≥ γ + 1, γ ≥ 0, n := dνe , f : A → R with f ∈ L1 (A) , where A := B (0, R2 ) − B (0, R1 ) ⊆ RN , N ≥ 2, 0 < R1 < R2 . Assume ∂ ν f (·ω)

that f (·ω) ∈ AC n ([R1 , R2 ]) , ∀ω ∈ S N −1 , and that R1∂rν ∈ L∞ (R1 , R2 ) , ν ∀ω ∈ S N −1 . Further assume that ∃DR f (rω) ∈ R, ∀r ∈ [R 1 , R2 ] , for each 1 ∂ ν f (x)

1 ω ∈ S N −1 , with R∂r ν that ∃M1 > 0 such that Suppose that

N −1 ∞ (A) . We suppose ∀r ∈ [R1 , R2 ] and ∀ω ∈ S ∈ L Dν f (rω) ≤ M1 . R1

∂ j f (R1 ω) = 0, j = 0, 1, . . . n − 1, ∀ω ∈ S N −1 . ∂rj Set

Z

r

2(ν−γ−1) (1−N )

(r − t)

P (r) :=

t

dt, R1 ≤ r ≤ R2 ,

(203)

R1

also

N −1 2

) pP (r) A (r) := , Γ (ν − γ) !1/2 Z R2 2 e A0 (R2 ) := (A (r)) dr . r(

(204)

(205)

R1

Then

ν Z γ ∂R1 f (x) ∂R f (x) 1 ∂rγ ∂rν dx ≤ A e0 (R2 ) 2−1/2 A

Z  A

ν ∂R f (x) 1 ∂rν

Proof. Similarly as in Proposition 82.

49



2

! dx .

(206)



References [1]

Aliprantis, Charalambos D., Burkinshaw, Owen, Principles of Real Analysis, Third Edition, Academic Press, Inc., San Diego, CA, 1998.

[2]

G. A. Anastassiou, Quantitative Approximations, Chapman & Hall/CRC, Boca Raton, New York, 2001.

[3]

G. A. Anastassiou, Opial type inequalities involving fractional derivatives of two functions and applications, Computers and Mathematics with Applications, 48 (2004), 1701-1731.

[4]

Anastassiou, George A. Fractional Opial inequalities for several functions with applications, J. Comput. Anal. Appl. 7 (2005), no. 3, 233-259.

[5]

G. A. Anastassiou, Riemann-Liouville fractional multivariate Opial type inequalities on spherical shells, accepted for publication, Bulletin of Allahabad Math. Soc., India, 2007.

[6]

G. A. Anastassiou, Fractional Multivariate Opial type inequalities over spherical shells, Communications in Applied Analysis 11 (2007), no. 2, 201-233.

[7]

G. A. Anastassiou, Reverse Riemann-Liouville fractional Opial inequalities for several functions, Submitted 2007.

[8]

G. A. Anastassiou, Converse fractional Opial inequalities for several functions, Submitted 2007.

[9]

G. A. Anastassiou, Opial Type Inequalities Involving Riemann-Liouville Fractional derivatives of two functions, accepted for publication, Mathematics and Computer Modelling, 2007.

[10] G. A. Anastassiou, Riemann-Liouville Fractional Opial inequalities for several functions with Applications, accepted , International I. of Pure and Appl. Math.,2007. [11] G. A. Anastassiou, J. J. Koliha and J. Pecaric, Opial inequalities for fractional derivatives, Dynam. Systems Appl. 10 (2001), no. 3, 395-406. [12] G. A. Anastassiou, J. J. Koliha and J. Pecaric, Opial type Lp inequalities for fractional derivatives, Intern. Journal of Mathematics and Math. Sci., Vol. 31, no. 2 (2002), 85-95. [13] P. R. Bessack, On an integral inequality of Z. Opial, Trans. Amer. Math. Soc., 104 (1962), 470-475. [14] Caputo M. (1967), Linear Models of Dissipation Whose Q is Almost Frequency Independent-II, Geophys J Royal Astronom Soc 13:529-539.



[15] Caputo M., Mainardi F. (1971a), A New Dissipation Model Based on Memory Mechanism, Pure and Appl Geophys 91:134-147. [16] Caputo M., Mainardi F. (1971b), Linear Models of Dissipation in Anelastic Solid, Rivista del Nuovo Cimento 1:161-198. [17] Kai Diethelm, Fractional Differential Equations, on line: http://www.tubs.de/˜diethelm/lehre/f-dgl02/fde-skript.ps.gz [18] G. D. Handley, J. J. Koliha and J. Pecaric, Hilbert-Pachpatte type integral inequalities for fractional derivatives, Fract. Calc. Appl. Anal. 4, Vol 1 (2001), 37-46. [19] Virginia Kiryakova, Generalized fractional calculus and applications, Pitman Research Notes in Math. Series, 301. Longman Scientific and Technical, Harlow; co-published in U.S.A with John Wiley and Sons, Inc., New York, 1994. [20] Kenneth Miller, B. Ross, An Introduction to the Fractional Calculus and fractional diferential equations, John Wiley and Sons, Inc., New York, 1993. [21] Keith Oldham, Jerome Spanier, The Fractional Calculus: Theory and Applications of Differentiation and Integration to arbitrary order, Dover Publications, New York, 2006. [22] Z. Opial, Sur une inegalit´e, Ann. Polon. Math. 8 (1960), 29-32. [23] H. L. Royden, Real Analysis, second edition, Macmillan, 1968, New York. [24] W. Rudin, Real and Complex Analysis, International Student Edition, Mc Graw Hill 1970, London, New York. [25] D. Stroock, A Concise Introduction to the Theory of Integration, 3 rd Edition, Birk¨ auser, Boston, Basel, Berlin, 1999. [26] E. T. Whittaker and G. N. Watson, A Course in Modern Analysis, Cambridge University Press, 1927. [27] D. Willett, The existence-uniqueness theorem for an n th order linear ordinary differential equation, Amer. Math. Monthly, 75 (1968), 174-178.



JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.1, 92-97, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Asymptotics for Szeg¨ o polynomials with respect to a class of weakly convergent measures Michael Arciero1 , Lewis Pakula2 1 University 2 University

of New England, Biddeford ME, 04005, [email protected] of Rhode Island, Kingston RI, 02881, [email protected]

Abstract Recent results of the author characterize limits for Szeg¨ o polynomials of fixed degree k with respect to measures which are weakly convergent to a sum of m < k point masses, with the measures formed by convolving the point masses with the Poisson and Fej´er kernels. Moreover, the limit polynomial is seen to be the same in each case. Here, we show that the Poisson kernel can be expressed as a convex combination of Fej´er kernels. Conjectures are made for a general class of kernels whose Fourier coefficients µ b(`) form convex functions of `. Keywords: Szeg¨o polynomial, orthogonal polynomial, frequency analysis, Poisson kernel, Fej´er kernel.

1

Introduction

Given a measure, µ, on the unit circle, the Szeg¨o polynomial of degree k with respect to µ, which we denote Pk (z, µ), is the polynomial in the complex variable z which attains the minimum Z π Z π iθ 2 min |p(e )| dµ(θ) = |Pk (eiθ , µ)|2 dµ(θ), (1) p∈Λk

−π

−π

where Λk is the set of monic polynomials of degree k. The Szeg¨o polynomials with respect to a measure µ form an orthogonal sequence, are uniquely defined if the degree is less than the number of points on which µ is supported, and can expressed as a ratio of matrix determinants or generated recursively using Levinson’s recursion. Szeg¨o polynomials have many applications and have been studied widely. See [4, 6, 10, 11] for background. Some results related to frequency analysis appear in [5, 7, 9, 8]. The motivation for the use of Szeg¨o polynomials in frequency analysis is loosely based on the observation that the spectral measure of a digital signal with strong sinusoidal components will be heavily weighted at the frequency locations θj , and in light of (1), one would expect Pk (z, µ) to have zeros near eiθj . Note that for 1



Table 1: Poisson and Fej´er kernels kernel Poisson:

Fej´er:

density ψh (θ) =

moments ψbh (`)

h

r|`|

1−r

1 − r2 |eiθ − r|2

· ¸2 1 sin(nθ/2) ψh (θ) = n sin(θ/2)

(1 −

|`| + ) n

1 n

supp(µ) = m < k, Pk (z, µ) is not uniquely defined since any polynomial with m zeros at the point mass locations will attain the minimum of zero in (1). In [1, 2] we consider measures formed by convolvingPthe Poisson and Fej´er kernels, respectively, with the sum of point masses m j=1 αj δθj , where δθj is the point mass measure at θj the αj are positive. Both are examples of approximate identities ψh with h → 0 as either r → 1 or n → ∞, respectively, as indicated in Table 1, where x+ = max{x, 0}. It is easy to see that for any approximate identity ψh , we have the weak-star limit lim ψh ∗

h→0

m X

αj δθj =

j=1

m X

αj δθj

(2)

j=1

On the other hand, for m < k, the associated Szeg¨o polynomials of fixed P degree k do not necessarily converge. That is, µh → m α δ does not j=1 j θj guarantee existence of the limit limh→0 Pk (z, µh ) even if µh converges strongly. (See [1] for an example.) P The main point of [1] and [2] is that the Pk (z, ψh ∗ m j=1 αj δθj ) do converge; moreover, the limit is the same for both kernels. We have the following Theorem 1.1 Let ψh be either the Fej´er or the Poisson kernel with the identifications in Table 1. Suppose δθj is the point mass at θ = θj with the θj distinct and αj > 0 for j = 1, 2, 3, ..., m. Then lim Pk (z, ψh ∗

h→0

m X

αj δθj ) = Pk−m (z, ν)

m Y j=1

j=1

2

(z − eiθj ),

(3)



where ν is the absolutely continuous measure with m

m

dν X Y = αj |eiθ − eiθp |2 . dθ j=1 p6=j

(4)

We seek to extend Theorem 1.1 to a larger class of kernels. We consider kernels that have moments ψbh (`) which are convex functions of ` for ` > 0, or which can be expressed as a convex combination of Fej´er kernels, and make a conjecture in each case. The motivation for this is that the Poisson and Fej´er kernels have moments which are convex functions (though those of that latter are not strictly so). Moreover, it is possible to expand the moments of the Poisson kernel in terms of those of the Fej´er. Specifically, we have Proposition 1.1 Let φn (θ) denote the Fej´er kernel for n = 1, 2, 3, ..., and define ar,n = (1 − r)2 nrn−1 . Then r

|`|

=

∞ X

ar,n φbn (`) .

n=1

Proof: We show this by writing the above as geometric series. Since φbn (−`) = φbn (`) we can assume ` ≥ 0 and write ∞ X

ar,n φbn (`) = (1 − r)2

n=1 2

= (1 − r)

∞ X

nrn−1 (1 −

n=1 ∞ X

nr

n−1

` + ) n 2

− (1 − r)

n=`+1

n=`+1

Regarding the first sum in (5), we have ∞ X n=`+1

`

nr

n−1

X 1 − nrn−1 = (1 − r)2 n=1 `

d X n 1 − = r (1 − r)2 dr n=0 3

∞ X

`rn−1 .

(5)


µ ¶ 1 d 1 − r`+1 = − (1 − r)2 dr 1−r ` r (1 + ` − `r) = . (1 − r)2


(6)

Regarding the second sum in (5), we have Ã∞ ! ∞ ` X X X `rn−1 = ` rn−1 − rn−1 n=`+1

µ

n=1

n=1 ¶ `

1 1−r − 1−r 1−r ` r = ` . 1−r = `

(7)

`

The sum of (6) and (7) is $\dfrac{r^{\ell}}{(1-r)^2}$, which, with (5), proves the proposition. An immediate consequence of Proposition 1.1 is the following.

Corollary 1.1 With $\phi_n$ and $a_{r,n}$ as in Proposition 1.1, the Poisson kernel can be expressed as
$$\psi_r(\theta) = \frac{1-r^2}{|e^{i\theta}-r|^2} = \sum_{n=1}^{\infty} a_{r,n}\,\phi_n(\theta).$$
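Proposition 1.1 and Corollary 1.1 are straightforward to confirm numerically: truncating $\sum_{n\ge 1} a_{r,n}\,(1-|\ell|/n)_+$ reproduces $r^{|\ell|}$ to any desired accuracy, and the weights $a_{r,n}$ sum to 1. The following sketch is not part of the paper; it assumes Python with NumPy and an arbitrary truncation level.

import numpy as np

def poisson_moment_via_fejer(r, ell, n_max=5000):
    """Truncate sum_{n>=1} a_{r,n} * (1 - |ell|/n)_+ with a_{r,n} = (1-r)^2 * n * r^(n-1)."""
    n = np.arange(1, n_max + 1, dtype=float)
    a = (1.0 - r) ** 2 * n * r ** (n - 1.0)
    fejer = np.clip(1.0 - abs(ell) / n, 0.0, None)      # Fejer moments (1 - |ell|/n)_+
    return float(np.dot(a, fejer)), float(a.sum())

r = 0.9
for ell in (0, 1, 5, 20):
    approx, weight_sum = poisson_moment_via_fejer(r, ell)
    print(ell, approx, r ** abs(ell), weight_sum)       # approx ~ r^|ell|, weights sum to ~1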

2

Conjectures for a class of densities

Let $f$ be a function with the following properties:
1. $f(x) \ge 0$ for all $x$.
2. $f(x) = f(-x)$.
3. $f$ is convex and non-increasing for $x > 0$.
Such functions satisfy the Polya criterion and thus are the characteristic functions of positive measures. (See, e.g., [3], p. 482.) We will call $\{a_n\}$ and $\psi$ a sequence and density, respectively, of Polya type, if $\widehat{\psi}(n) = a_n = f(n)$ for some function $f$ satisfying the Polya criterion. So the Poisson and Fejér kernels are densities of Polya type for $0 < h < 1$. We conjecture that Theorem 1.1 holds for all kernels of Polya type.



Conjecture 1 Let $f_h$ be a family of Polya type functions for a continuous or discrete parameter $h$ on $0 < h < 1$ with $f_h(0) = 1$ and $\lim_{h\to 0} f_h(x) = 1$ for all $x$, and suppose $\psi_h$ is the measure with $\widehat{\psi}_h(\ell) = f_h(\ell)$. Then (3) holds for the kernel $\psi_h$, with $\nu$ given in (4).
A variant of Conjecture 1 is motivated by the construction of Proposition 1.1 and the fact that the Fejér kernel would seem to be a "base case" since its moments are linear rather than strictly convex. Indeed, one might suspect that any convex function of $x$ can be expressed as a convex combination of the functions $(1 - x/n)_+$. Let $\phi_n$ denote the Fejér kernel as in Table 1 and suppose $\{a_n\}_{n=1}^{\infty}$ is a sequence of non-negative real numbers with $\sum_{n=1}^{\infty} a_n = 1$; then $\psi(\theta) = \sum_{n=1}^{\infty} a_n \phi_n(\theta)$ is a density of Polya type. Now suppose that $A_N := \{a_{N,n}\}$ is a sequence of sequences with $\sum_{n=1}^{\infty} a_{N,n} = 1$ for each $N = 1, 2, 3, \ldots$ and $\lim_{N\to\infty} a_{N,n} = 0$ for each $n$. If the latter holds, we write $A_N \to 0$. We define sequences $A_h$ for a continuous parameter $h \to 0$ similarly, and write $A_h \to 0$. In either case, we simply write $A \to 0$, and conjecture that Theorem 1.1 holds for $\psi_A$.
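The claim that a convex combination $\psi(\theta) = \sum_n a_n \phi_n(\theta)$ of Fejér kernels is of Polya type amounts to checking that $\ell \mapsto \sum_n a_n (1-\ell/n)_+$ is non-negative, non-increasing and convex for $\ell \ge 0$. The sketch below illustrates this numerically for one assumed weight sequence; it is not from the paper and assumes Python with NumPy.

import numpy as np

def mixture_moments(a, ell_max):
    """Moments f(ell) = sum_n a_n (1 - ell/n)_+ of a convex combination of Fejer kernels."""
    n = np.arange(1, len(a) + 1)
    ell = np.arange(0, ell_max + 1)
    tri = np.clip(1.0 - ell[None, :] / n[:, None], 0.0, None)   # (1 - ell/n)_+
    return (np.asarray(a)[:, None] * tri).sum(axis=0)

# An arbitrary (assumed) weight sequence with sum 1.
a = np.array([0.1, 0.0, 0.3, 0.2, 0.0, 0.4])
f = mixture_moments(a, 12)

d1 = np.diff(f)        # should be <= 0 (non-increasing)
d2 = np.diff(f, 2)     # should be >= 0 (discrete convexity)
print(f)
print(np.all(d1 <= 1e-12), np.all(d2 >= -1e-12))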

∞ X

ah,n φn (θ).

n=1

Then (1.1) holds for the kernel ψA ; that is, Pk (z, ψA ∗

m X

αj δθj ) → Pk−m (z, ν)

j=1

m Y

(z − eiθj )

j=1

. Remarks: With ar,n given in Corollary 1.1, A → 0 and ψAr is the Poisson kernel, while the Fej´er kernel corresponds to AN = {0, 0, 0, ..., 1, 0, 0, ...}, with 1 in the N -th position. We see from Table 1 that the moments of the Fej´er and Poisson kernels agree to first order in h. The Fej´er kernel may thus be thought of as a base case in this sense as well. The techniques in [1, 2] do not exploit this property, however. It is possible Polya-type kernels are contained in a larger class which includes those whose moments agree to first order. 5

ARCIERO-PAKULA: SZEGO POLYNOMIALS

References [1] M. Arciero, Limits for Szeg¨o polynomials in frequency analysis, J. Math. Anal. Appl. 304 (2005) 321-335. [2] M. Arciero, A limit theorem for Szeg¨o Polynomials with respect to convolution of point masses with the Fej´er kernel, J. Math. Anal. Appl., Vol. 327, No. 2, (2007) 908-918. [3] W. Feller, “An introduction to probability theory and its applications” Vol II 3ed., Wiley, 1971 [4] I.A. Geronimus, Polynomials Orthogonal on a Circle and Interval, New York, Pergamon Press, 1960. [5] W.B. Jones, O. Nj˚ asted, Applications of Szeg¨o Polynomials to Digital Signal Processing, Rocky Mountain Journal of Mathematics Vol. 21, No.1, WInter, 1991. [6] N. Levinson, The Wiener RMS (root mean square) error criterion in filter design and prediction, Journal of Math. and Physics, 25, 1947 pp. 261-268. [7] L. Pakula, Asymptotic zero distribution of orthogonal polynomials in sinusoidal frequency estimation, IEEE Transactions on Information Theory, Vol. IT 33, No.4, pp. 569-576, July 1987. [8] K. Pan, E.B. Saff, Asymptotics for zeros of Szego polynomials, Journal of Approximation Theory, Vol. 71, No. 3, pp. 239-251, Dec. 1992. [9] V. Petersen, Modification of a method using Szeg¨o polynomials in frequency analysis: the V-process, Journal of Computational and Applied Mathematics, Volume 133, Issues 1-2 August 2001, pp. 535-544. [10] B. Simon, Orthogonal Polynomial on the Unit Circle AMS Colloquium Series, American Mathematical Society, Providence, 2005. [11] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, MIT Press, Cambridge, Wiley, New York, 1949.

6

97

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.1, 98-109, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

A computational approach to the determination of nets Hans Fetter and Juan H. Arredondo R. Departamento de Matem´aticas. Universidad Aut´onoma Metropolitana-Iztapalapa. Av. San Rafael Atlixco # 186. C. P. 09340. M´exico, D. F. M´exico. Keywords and phrases: Unfolding, dodecahedron, Shepard’s conjecture. 2000 Mathematics Subject Classification: 05B30, 52B05

Abstract An unfolding of a polyhedron consists of cutting the boundary of it along some of its edges in such a way that one can flatten out the remaining set in the plane in a single piece. An unfolding is a net if it does not overlap itself. No convex polyhedron has been found which does not have a net, though almost all of its unfoldings overlap. In the particular case of the Dodecahedron, we show by means of both theoretical and computational considerations that every unfolding is a net.

1

Introduction

There is an extensive study of unfoldable/foldable structures and applications are different and all very interesting. Among them, we may mention the Japanese art of paper folding and the utilization of unfoldable connected structures in aerospace. See for instance http://www.patentgenius.com/patent/6920733.html Even very regular polyhedra have an overlapping unfolding, although it is possible also to construct an unfolding of the polyhedron which is a net [3]. Furthermore, Schevon [2] shows for some class of polyhedra that almost all unfoldings of a polyhedron in the class overlaps. In the particular case of the Dodecahedron one can ask if it has an overlapping unfolding. The total number of unfoldings for the Dodecahedron is 43380. We give an answer to this question by means of theoretical and computational aids. See also [1] where a purely theoretical approach is considered.

1

FETTER-ARREDONDO: ABOUT NETS

Figure 1: (A): The Platonic solid. (B): Graphic representation of the Dodecahedron. This problem is related to Shepard’s conjecture which states that any convex polyhedron has at least one net. Up to date there has not been found a convex polyhedron for which every unfolding overlaps, thus reinforcing the conjecture.

2

Theoretical framework

The regular dodecahedron is the Platonic solid composed of 20 vertices, 30 edges and 12 pentagonal faces. See figure 1 (A). Definition 2.1. (i) An unfolding of a polyhedron consists of cutting the boundary of it along some of its edges in such a way that one can flatten out the remaining set in the plane in a single piece. (ii) An unfolding is a net if it does not overlap itself. Definition 2.2. A single chain is a set of faces belonging to an unfolding of a polyhedron and such that anyone of its elements shares

2




Figure 2: (A): Unfolding of a polyhedron. (B): Single chain of faces determining the overlapping. Obtained from the Wolfram Demonstrations Project. at least one edge and at most two edges. The number of faces in the single chain is called its length. Figure 2 (A) shows an unfolding of a polyhedron, which is not a net. 2 (B) exhibits the single chain determining this overlapping. In figure 3 we have an unfolding of the Dodecahedron without overlapping, and therefore it is a net. Our strategy to solve the problem we stated is quite simple and direct. From the lemma 2.1 below, to check if every unfolding is a net one needs only to look at all single chains of lengths one to twelve. This observation reduces the problem not merely because of the number of unfoldings (43380) is greater than that of the single chains (≈ 5000), but to construct and analyze is easier for a single chain than for an unfolding. Lemma 2.1. Every unfolding of the Dodecahedron with an overlapping has a self intersecting single chain of pentagons of length less or equal twelve. Proof: Suppose that U = { f1 , . . . , f12 } is an unfolding of the dodecahedron having an overlapping. Let fk and f` a pair of faces of

3

FETTER-ARREDONDO: ABOUT NETS

Figure 3: Unfolding of the Dodecahedron which is a net. U with non void intersection and not being adjacent in the unfolding U. Since every unfolding is a connected set, there must exist a subset {fk , . . . , f` } of U forming a single chain of pentagons of length less or equal to twelve. This proves the lemma. ¤ Theorem 2.1. If every single chain of pentagons of length less or equal to twelve does not intersect itself, then every unfolding of the Dodecahedron is a net. Proof: Since every unfolding that overlaps contains a self intersecting single chain, from the previous lemma the result follows at once. ¤

3

Main Theorem

Now we will prove that every single chain does not intersect itself. In order to show this, we use a computational algorithm that calculates and analyzes every single chain of the Dodecahedron. Theorem 3.1. Every unfolding of the Dodecahedron is a net.

4

101

102


Proof: From theorem 2.1 we need only to check that every single chain does not intersect itself. The strategy to do it is conceptually simple and is described in the next three steps. (i) First step: Let 1 ≤ k ≤ 12 and {f1 , . . . , fk } denote a single chain of the Dodecahedron. We note that for k = 1, . . . , 5 one can check just by inspection that no single chain intersects itself. (ii) Second step: We proceed iteratively. To check if there is a single chain of length 6 that intersects itself, we only have to verify that the pentagons at the extremes in the single chain do not intersect. Precisely, suppose that there is a single chain of length 6 that intersects itself. If fi and fj are not adjacent pentagons of the single chain having non void intersection, then there is a subset of pentagons {f`1 , f`2 , . . . , f`k } (2 ≤ k ≤ 6; f`1 = fi , f`k = fj ) of the original single chain that forms itself another single chain, which we denote by S 0 . This single chain intersects itself. From the fact that no single chain of length less or equal to 5 has an overlapping, S 0 must have length 6. Therefore, it is the original single chain and the pentagons overlapping are the extremes of the original one. This reduces the problem to check wether the pentagons at the extremes in every single chain of length 6 do not overlap. As a consequence, if we show that the extremes of a single chain of length 6 do not intersect, then we would have proved that no such single chains have an overlapping. This argument can be applied inductively for k = 7, . . . , 12. (iii) Third step: Construction and analysis of single chains. Our algorithm calculates systematically each single chain of lengths 6 ≤ k ≤ 12 and verifies that there is no overlapping. Therefore, the theorem is proved ¤

3.1

Description of the algorithm

We describe the algorithm. The program uses the parameter `=length of a single chain. The single chain is represented by a list S of ` numbers between 1 to 12. By a previous routine we calculate all single chains of lengths ` = 6, . . . , 12. The parameter D is a data matrix containing the structure of the Dodecahedron, as given in Table 1. The algorithm gives as output at a stage a list L of numbers that the computer associates to vectors connecting the centers of adjacent pentagons. See figures 4 and 9. Here we mean by the center of the

5


Figure 4: Directions 1’, 2’, 3’, 4’ and 5’ are associated with the vectors pointing to the center of adjacent pentagons. pentagon, the center of the circle circumscribing the pentagon. This fixes the coordinates of the center of each pentagon in the single chain. By another routine, we draw the graphic of the single chain S and determine if it intersects itself.   a b c d e 1 {2, a} {3, b} {4, c} {5, d} {6, e}  a j k l f 2 {1, a} {6, j} {7, k} {8, l} {3, f }     m n g b f 3 {8, m} {9, n} {4, g} {1, b} {2, f }     g o p h c 4 {1, c} {3, g} {9, o}  {10, p} {5, h}    d h q r i 5 {1, d} {4, h} {10, q} {11, r} {6, i}      {11, s} {7, t} {2, j}   e i s t j 6 {1, e} {5, i}   {11, y} {12, z} {8, u}   k t y z u 7 {2, k} {6, t}    l u Ω v m 8 {2, l} {7, u} {12, Ω} {9, v} {3, m}     n v β w o 9 {3, n} {8, v} {12, β} {10, w} {4, o}     p w γ x q 10 {4, p} {9, w} {12, γ} {11, x} {5, q}     r x δ y s 11 {5, r} {10, x} {12, δ} {7, y} {6, s}  z δ γ β Ω 12 {7, z} {8, Ω} {9, β} {10, γ} {11, δ} Table 1. Symbolic representation of the Dodecahedron.
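The face-adjacency information in Table 1 is all that is needed to generate the single chains: a single chain is simply a simple path in the face-adjacency graph of the dodecahedron. The sketch below is an illustrative re-implementation of the "previous routine" mentioned above, not the authors' code; it assumes Python, with the adjacency transcribed from Table 1, and it enumerates the simple paths starting at face 1 of length at most 12 (before the symmetry reductions described in the text).

# Face adjacency of the dodecahedron, read off from Table 1 (face i -> its five neighbours).
ADJ = {
    1: [2, 3, 4, 5, 6],    2: [1, 6, 7, 8, 3],    3: [8, 9, 4, 1, 2],
    4: [1, 3, 9, 10, 5],   5: [1, 4, 10, 11, 6],  6: [1, 5, 11, 7, 2],
    7: [2, 6, 11, 12, 8],  8: [2, 7, 12, 9, 3],   9: [3, 8, 12, 10, 4],
    10: [4, 9, 12, 11, 5], 11: [5, 10, 12, 7, 6], 12: [7, 8, 9, 10, 11],
}

def single_chains(start=1, max_len=12):
    """All simple paths (single chains) of length <= max_len starting at the given face."""
    chains = []
    def extend(path):
        chains.append(tuple(path))
        if len(path) == max_len:
            return
        for nxt in ADJ[path[-1]]:
            if nxt not in path:
                extend(path + [nxt])
    extend([start])
    return chains

chains = single_chains()
by_length = {}
for ch in chains:
    by_length[len(ch)] = by_length.get(len(ch), 0) + 1
print(by_length)    # number of single chains starting at face 1, per length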

6




Figure 5: Figure (a) shows the orientation chosen for a single chain. Figure (b) shows the orientations given for a face “up” and a face “down”, respectively. Note that on symmetry reasons one can assume that all single chains start at the pentagon labelled 1. See figure 1. Furthermore, one can see by similar arguments that it is enough to analyze single chains having faces 1, 5, 4, . . . and 1, 5, 10, . . .. The algorithm calculates in a previous routine all these single chains. We first show how the algorithm works with an example. Suppose that we want to analyze the single chain {1, 5, 4, 3, 8, 12, 7}. Certainly we can suppose that pentagon 1 is placed so that its edge d is horizontal. See figure 1. Then pentagon 5 has its edge d also horizontal and pentagons 1 and 5 look in an unfolding as in figure 5(a). It follows that 4 must be attached to 5 at the common edge h. See table 1 and figure 1. By simple geometry one realizes that pentagon 4 must be attached to 5 in such a way that 4 can be seen as a translation (not a rotation) of pentagon 1. Similarly, pentagon 3 must be attached to 4 and this can be done by translation of pentagon 5. In a similar way, 8 is attached to 3 by a translation of 1 and 12 is attached to 8 by a translation of 5. By simple geometric arguments, this continues to hold every time that one attaches a new pentagon at an unfolding. We assign to every pentagon an interior orientation depending on wether it is placed “up” or “down” as shown in figure 5(b). Now we add pentagon 4 to 5. See figure 5(a). 5 has already the orientation given by the condition that is placed “down” and its edge

7

FETTER-ARREDONDO: ABOUT NETS

Figure 6: Orientation of face 5. d is placed horizontally. From table 1, fifth row, the first five data correspond to the edges of pentagon 5. We know that edge d has already been assigned number 50 which yields the interior counterclockwise orientation (d, 50 ), (h, 40 ), (q, 30 ), (r, 20 ), (i, 10 ). See figures 1 and 6. We proceed to give an orientation to pentagon 4. This pentagon is placed “up” and the common edge h with pentagon 5 has already been assigned 40 . Looking at table 1, fourth column, the orientation obtained for pentagon 4 is (g, 20 ), (o, 10 ), (p, 50 ), (h, 40 ), (c, 30 ). See figures 1 and 7. Now we add pentagon 3, which must be placed “down” and the common edge with pentagon 4 is g. This edge has previously been assigned number 20 . Looking at table 1, third column, the orientation for pentagon 3 is (m, 40 ), (n, 30 ), (g, 20 ), (b, 10 ), (f, 50 )

8

105

106


Figure 7: Orientation of face 4.

Figure 8: Orientation of face 3.

9


Figure 9: (A) The algorithm gives the list L associated to the directions connecting the centers of adjacent pentagons. (B) Graphic of the single chain.
See figures 1 and 8. Initially, the center of pentagon 1 is placed so that it coincides with the origin of the chosen system of coordinates. If one continues this process, one obtains the graph of the single chain {1, 5, 4, 3, 8, 12, 7} as it appears in an unfolding. See figure 9 (B). To determine whether there is an overlap in this single chain, it suffices to calculate the distance from the center of pentagon 7 to the origin of coordinates. If this distance is greater than twice the radius of the circle circumscribing pentagon 1, then there is no overlapping. The algorithm gives as output in the case of this single chain the list L = {5', 4', 2', 4', 1', 5'}, which gives the vectors linking the centers of neighboring pentagons in the single chain. See figures 4 and 9.
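The geometric content of the output list L is easy to reproduce: each primed direction corresponds to a unit step from one pentagon centre to the next, and the five directions are 72 degrees apart. The absolute orientation of direction 1' and the exact label-to-direction assignment are fixed by Figure 4, which is not reproduced here, so the particular convention below is an assumption of this sketch (illustrative code, not the authors' program; Python with NumPy). Walking the list L = {5', 4', 2', 4', 1', 5'} and measuring the distance from the last centre back to the origin then gives the overlap test described above; with unit steps between centres, the circumradius of each pentagon is 1/(2 cos 36°).

import numpy as np

def centres_from_directions(L, step=1.0):
    """Pentagon centres obtained by walking the direction list L (labels 1..5, 72 deg apart)."""
    pos = np.zeros(2)
    centres = [pos.copy()]
    for d in L:
        angle = np.deg2rad(90.0 + 72.0 * (d - 1))   # assumed absolute orientation of direction 1'
        pos = pos + step * np.array([np.cos(angle), np.sin(angle)])
        centres.append(pos.copy())
    return np.array(centres)

# Direction list produced for the single chain {1, 5, 4, 3, 8, 12, 7} in the text.
L = [5, 4, 2, 4, 1, 5]
centres = centres_from_directions(L)
dist = np.linalg.norm(centres[-1] - centres[0])
circumradius = 1.0 / (2.0 * np.cos(np.deg2rad(36.0)))   # in units of the centre-to-centre step

print(centres.round(3))
print("distance first-to-last centre:", round(dist, 3), " threshold 2R:", round(2 * circumradius, 3))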


Algorithm 3.1: Single Chain Analysis(ℓ, S, D)
Begin
  comment: Read a single chain S with elements S[1], . . . , S[ℓ]
  S ← Read
  comment: The first direction is always 5′ and determines the orientation of S[2] = 5
  L = {5′}
  comment: Orient pentagon S[3]
  if S[3] = 4 then
    Pent[3] = {(g, 2′), (o, 1′), (p, 5′), (k, 4′), (c, 3′)}
    Append direction 2′ to L
  else if S[3] = 10 then
    Pent[3] = {(p, 2′), (w, 1′), (γ, 5′), (x, 4′), (q, 3′)}
    Append direction 3′ to L
  end if
  comment: Append to L the directions of the other pentagons in S
  for ν1 = 4 to ℓ do
    ξ1 = S[ν1]
    comment: Determine S[ν1] ∩ S[ν1 − 1] from the data matrix D
    ρ1 := S[ν1] ∩ S[ν1 − 1]
    comment: Save the direction previously assigned to ρ1
    L ← Store direction assigned to ρ1
    comment: Assign to pentagon ξ1 an orientation compatible with the orientation at the previous face
    Pent[ν1] = {(Perm(1), 1′), (Perm(2), 2′), (Perm(3), 3′), (Perm(4), 4′), (Perm(5), 5′)}
  end for
  Return
end program




JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.1, 110-124, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Kernel based Wavelets on S³

S. Bernstein∗, S. Ebert†

March 15, 2009

MSC 2000: 42C40
Keywords: wavelets, approximate identities, zonal functions

Wavelets are used to split up complicated signals into simpler parts that reflect different scales at different positions. The investigation of crystalline structures motivates the study of wavelets on the three-dimensional sphere S³.

1 Introduction

It is an interesting but tough task to construct wavelets on a sphere, because no naive approach works. The biggest obstacle seems to be the definition of a dilation. There are several approaches, and the most successful ones are those of the group of W. Freeden (see for example [8]) and those of J. P. Antoine and P. Vandergheynst ([1], [2]). The approach of J. P. Antoine and P. Vandergheynst is group theoretical, whereas the approach of W. Freeden uses singular integrals and zonal functions. As mentioned in [3], both approaches are in some sense equivalent, even though a deeper study of the connection between them seems to be missing up to now. We are interested in wavelets on the three-dimensional sphere, motivated by the following problem. In texture analysis, i.e. the analysis of preferred crystallographic orientation, the orientation probability density function f, representing the probability law of random orientations of crystal grains by volume, is a major issue. In X-ray or neutron diffraction experiments spherical intensity distributions are measured, which can be interpreted in terms of spherical probability distributions of distinguished crystallographic axes. In texture analysis they are referred to as pole probability density functions. In general, if f is the orientation probability density function of a random rotation, then the spherical Radon transform Rf,

∗ Corresponding author, [email protected]
† Both authors: Institute of Applied Analysis, Freiberg University of Mining and Technology, D-09599 Freiberg, Germany


integrating f over all one-dimensional great circles C ⊂ S³, is simply the probability density function of {g ∈ C | C ⊂ S³}. In this way the spherical Radon transform provides an appropriate model of the X-ray diffraction experiment of texture analysis. To investigate the localisation properties of this spherical Radon transform we need wavelets on the 3-sphere. Our approach uses singular integrals and zonal functions.

2 Preliminaries

2.1 Surface hyperspherical harmonics

If R⁴ is parameterized in polar co-ordinates
\[
x_1 = r \sin\theta_3 \sin\theta_2 \cos\theta_1, \quad
x_2 = r \sin\theta_3 \sin\theta_2 \sin\theta_1, \quad
x_3 = r \sin\theta_3 \cos\theta_2, \quad
x_4 = r \cos\theta_3,
\]
where 0 ≤ θ₂, θ₃ ≤ π and 0 ≤ θ₁ ≤ 2π, then the hyperspherical harmonics on S³ are defined by
\[
Y_{klm} = (2i)^l\, l!\,\sqrt{\frac{2(k+1)(k-l)!}{\pi\,(k+l+1)!}}\; Y_{lm}(\theta_1,\theta_2)\, C^{l+1}_{k-l}(\cos\theta_3)\,\sin^{l}\theta_3,
\]
where Y_{lm}(θ₁, θ₂) are the well-known spherical harmonics on S². In these co-ordinates the (not normalized) SO(4)-invariant measure on S³ reads dσ = sin θ₂ sin² θ₃ dθ₁ dθ₂ dθ₃. The basis {Y_{klm}, 0 ≤ l ≤ k, −l ≤ m ≤ l} is in fact based on the reduction of the representation of SO(4) to representations of SO(3); each Y_{klm} is an eigenfunction of an SO(3) subgroup of SO(4) which leaves a selected point of S³ invariant.
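As a quick numerical sanity check of this parameterization and of the invariant measure (whose total mass over S³ is 2π², consistent with the normalization 1/(2π²) used below), one can verify that the map lands on the unit sphere and integrate dσ numerically; the sketch below assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy import integrate

def to_s3(theta1, theta2, theta3, r=1.0):
    """Polar coordinates on R^4 as in Section 2.1."""
    return np.array([
        r * np.sin(theta3) * np.sin(theta2) * np.cos(theta1),
        r * np.sin(theta3) * np.sin(theta2) * np.sin(theta1),
        r * np.sin(theta3) * np.cos(theta2),
        r * np.cos(theta3),
    ])

# every parameter choice lands on the unit sphere S^3
x = to_s3(0.3, 1.1, 2.0)
assert abs(np.linalg.norm(x) - 1.0) < 1e-12

# total mass of dsigma = sin(theta2) sin^2(theta3) dtheta1 dtheta2 dtheta3 is 2*pi^2
area, _ = integrate.tplquad(
    lambda t1, t2, t3: np.sin(t2) * np.sin(t3) ** 2,
    0, np.pi,                                     # theta3
    lambda t3: 0, lambda t3: np.pi,               # theta2
    lambda t3, t2: 0, lambda t3, t2: 2 * np.pi,   # theta1
)
print(area, 2 * np.pi ** 2)   # both approximately 19.739
```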

2.2 Function spaces

For 1 ≤ p < ∞ we have the Lebesgue space
\[
L^p(S^3) = \Big\{ f : \|f\|_p = \Big( \frac{1}{2\pi^2} \int_{S^3} |f(x)|^p \, d\sigma(x) \Big)^{1/p} \Big\}
\]
and, for zonal functions, i.e. functions that depend only on the angle θ, with t = cos θ, the weighted Lebesgue space
\[
L^p_1[-1,1] = \Big\{ f : \|f\|_{p,1} = \Big( \frac{1}{\sqrt{\pi}} \int_{-1}^{1} |f(t)|^p \sqrt{1-t^2}\, dt \Big)^{1/p} \Big\}.
\]


2.3 Funk–Hecke theorem

The ultraspherical or Gegenbauer polynomials {C_n^1(t)} (identical to the Chebyshev polynomials of the second kind U_n) build a complete orthogonal system on [−1, 1] with weight √(1 − t²). We denote by x · y = cos ∠(x, y) =: t the scalar product of x and y in R⁴.

Lemma 2.1. Let Y_n^l(x), l ∈ {1, 2, …, (n+1)²}, be an orthonormal basis of spherical harmonics of degree n, and let
\[
c_{nl}(f) := \int_{S^3} f(y)\, Y_n^l(y)\, d\sigma(y).
\]
Then
\[
\sum_{n=0}^{N} \sum_{l=1}^{(n+1)^2} c_{nl}(f)\, Y_n^l(x)
\]
tends to f(x) as N → ∞, uniformly in x ∈ S³, for every f ∈ C(S³).

Proof. The proof can be found in [10] or [12].

An immediate consequence is that the set of all spherical harmonics is dense in C(S³) and hence in all spaces L^p(S³), 1 ≤ p < ∞. A remarkable result should also be mentioned: given 1 ≤ p < ∞, p ≠ 2, there exists an f ∈ L^p(S³) such that the partial sums of the Laplace series for f do not converge in the L^p-norm (see [5], mentioned in [10]). This parallels the corresponding result for Fourier series.

Theorem 2.2 (Addition theorem). Let {Y_n^l(x); l = 1, 2, …, (n+1)²} be the set of (n+1)² linearly independent spherical harmonics of degree n, orthonormal on S³. Then
\[
C_n^1(x \cdot y) = \frac{2\pi^2}{n+1} \sum_{l=1}^{(n+1)^2} Y_n^l(x)\, Y_n^l(y), \qquad x, y \in S^3.
\]

A very important theorem is

Theorem 2.3 (Funk–Hecke). Let f ∈ L¹₁[−1, 1]. Then
\[
\int_{S^3} f(x \cdot y)\, Y_n(x)\, d\sigma(x) = \frac{4\pi}{n+1}\, Y_n(y) \int_{-1}^{1} f(t)\, C_n^1(t) \sqrt{1-t^2}\, dt.
\]
In particular, for n = 0 we obtain
\[
\int_{S^3} f(x \cdot y)\, d\sigma(x) = 4\pi \int_{-1}^{1} f(t) \sqrt{1-t^2}\, dt.
\]

The spherical harmonics {Y_n^l(x), l = 1, 2, …, (n+1)²} build a complete orthonormal system on S³ in C(S³) and L^p(S³), 1 ≤ p < ∞. Let f ∈ L¹(S³); then f can be expanded into a Laplace series of spherical harmonics,
\[
S(f; x) \sim \sum_{n=0}^{\infty} Y_n(f; x),
\]
where
\[
Y_n(f; x) = \frac{n+1}{2\pi^2} \int_{S^3} C_n^1(x \cdot y)\, f(y)\, d\sigma(y), \qquad n = 0, 1, 2, \ldots, \tag{1}
\]
and, if f is zonal, i.e. depends only on the scalar product x · y = t, we have
\[
Y_n(f; t) = \hat{f}(n)\, C_n^1(t), \quad \text{where} \quad \hat{f}(n) = \frac{2}{\pi} \int_{-1}^{1} f(t)\, C_n^1(t) \sqrt{1-t^2}\, dt. \tag{2}
\]
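Formula (2) can be evaluated directly by quadrature, since C_n^1 = U_n is available as scipy.special.eval_chebyu. The following minimal sketch (assuming NumPy/SciPy; the test function is an arbitrary smooth zonal function) computes the coefficients f̂(n) and checks that the truncated Laplace series reproduces f.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_chebyu   # C_n^1 = U_n, Chebyshev of the 2nd kind

def zonal_coeff(f, n):
    """f_hat(n) = (2/pi) * int_{-1}^{1} f(t) C_n^1(t) sqrt(1-t^2) dt, as in (2)."""
    integrand = lambda t: f(t) * eval_chebyu(n, t) * np.sqrt(1.0 - t * t)
    val, _ = quad(integrand, -1.0, 1.0)
    return 2.0 / np.pi * val

def zonal_partial_sum(f, t, N=30):
    """Partial sum  sum_{n=0}^{N} f_hat(n) C_n^1(t)  of the zonal Laplace series."""
    return sum(zonal_coeff(f, n) * eval_chebyu(n, t) for n in range(N + 1))

# example: a smooth zonal function concentrated near t = 1
f = lambda t: np.exp(3.0 * (t - 1.0))
for t in (-0.5, 0.0, 0.9):
    print(t, f(t), zonal_partial_sum(f, t))   # the last two columns agree closely
```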

3 Spherical singular integrals

The properties of the convolutions, i.e. Young's inequalities for groups and homogeneous spaces, can be found in the appendix. An important type of singular integrals are singular integrals of convolution type, which are generated by a singular kernel. This technique was introduced in the Euclidean space by Mikhlin and Prößdorf [11], earlier by Calderón and Zygmund [6], and has been extended to spheres by Dunkl [7] and Butzer [4]. The spherical convolution is based on convolution with zonal functions. For the sphere S³ this means, following [4]:

Definition 3.1. For f ∈ L¹(S³) and g ∈ L¹₁[−1, 1] the convolution h = f ∗ g of f and g is defined by
\[
h(x) = \frac{1}{2\pi^2} \int_{S^3} f(y)\, g(x \cdot y)\, d\sigma(y).
\]

The convolution has the following properties.

Remark 3.2. Let f ∈ L^p(S³) and g ∈ L^q₁, 1 ≤ p, q ≤ ∞. Then the convolution f ∗ g is defined almost everywhere on S³ and we have Young's inequality (see for example [13]):
\[
\|f \ast g\|_r \le \|f\|_p\, \|g\|_{q,1}, \qquad \frac{1}{r} = \frac{1}{q} + \frac{1}{p} - 1 \ge 0,
\]
in particular
\[
\|f \ast g\|_p \le \|f\|_p\, \|g\|_{1,1} \qquad \text{and} \qquad \|f \ast g\|_q \le \|f\|_1\, \|g\|_{q,1}.
\]
The Laplace series expansion of f ∗ g has the form
\[
Y_n(f \ast g; x) = \frac{1}{n+1}\, \hat{g}(n)\, Y_n(f; x),
\]
where ĝ(n) and Y_n(f; x) are given by (2) and (1), respectively. Based on this spherical convolution we are now able to define singular integrals on the unit sphere.


Definition 3.3. Let K_h ∈ L¹₁[−1, 1], h ∈ (0, 1), be a family of kernels whose coefficients satisfy
\[
\hat{K}_h(0) = \frac{2}{\pi} \int_{-1}^{1} K_h(t)\, C_0^1(t) \sqrt{1-t^2}\, dt = 1.
\]
Then the family
\[
I_h(f)(x) = (K_h \ast f)(x) = \frac{1}{2\pi^2} \int_{S^3} f(y)\, K_h(x \cdot y)\, d\sigma(y)
\]
is called a spherical singular integral, while the family K_h, 0 ≤ h < 1, is called its kernel. A singular integral I_h is said to be an approximate identity in L^p(S³), 1 ≤ p < ∞, if
\[
\lim_{h \to 1^-} \|I_h f - f\|_p = 0.
\]

Remark 3.4. In [4] the convolution integrals defined above are called spherical singular integrals.

Theorem 3.5. Let K_h ∈ L¹₁[−1, 1], h ∈ (0, 1), be a family of kernels such that

1. it is the kernel of a spherical convolution integral I_h, that is, it satisfies
\[
\hat{K}_h(0) = \frac{2}{\pi} \int_{-1}^{1} K_h(t)\, C_0^1(t) \sqrt{1-t^2}\, dt = 1;
\]
2. there exists a constant M ≥ 1 such that for all h ∈ (0, 1)
\[
\frac{1}{2\pi^2} \int_{S^3} |K_h(x \cdot y)|\, d\sigma(y) \le M;
\]
3. for every fixed δ > 0,
\[
\lim_{h \to 1^-} \ \sup_{-1 \le t \le 1-\delta} |K_h(t)| = 0.
\]

Then for every f ∈ L^p(S³), 1 ≤ p < ∞, the convolution integral
\[
I_h(f)(x) = (K_h \ast f)(x) = \frac{1}{2\pi^2} \int_{S^3} f(y)\, K_h(x \cdot y)\, d\sigma(y)
\]
fulfills
a) \|I_h f\|_p ≤ M \|f\|_p;
b) \lim_{h → 1^-} \|I_h f − f\|_p = 0.


Proof. Proposition a) is a consequence of Young's inequality and (2.). To prove the second proposition, we define S(x, δ) = {y ∈ S³ : x · y ≤ 1 − δ}, δ > 0. Then
\[
I_h(f, x) - f(x) = \frac{1}{2\pi^2}\int_{S^3\setminus S(x,\delta)} K_h(x\cdot y)\,[f(y)-f(x)]\,d\sigma(y)
 + \frac{1}{2\pi^2}\int_{S(x,\delta)} K_h(x\cdot y)\,[f(y)-f(x)]\,d\sigma(y) = I_1 + I_2 .
\]
For I_1 we use the fact
\[
\int_{S^3} |K_h(x\cdot y)|\, d\sigma(y) = 2\pi \int_{-1}^{1} |K_h(t)|\,(1-t^2)\, dt .
\]
By Young's inequality, the Hölder–Minkowski inequality and (3.), for a given ε > 0 there exists h₀(ε) such that
\[
\|I_1\|_p \le 2\|f\|_p\,\frac{2\pi}{2\pi^2}\int_{-1}^{1} |K_h(t)|\sqrt{1-t^2}\, dt
 \le \frac{2}{\pi}\,\|f\|_p \sup_{-1\le t\le 1-\delta} |K_h(t)| < \varepsilon
\]
for h > h₀(ε). Further, we get
\[
\|I_2\|_p \le \frac{1}{2\pi^2}\int_{S(x,\delta)} |K_h(x\cdot y)|\,\|f(y)-f(x)\|\, d\sigma(y) \le \varepsilon\cdot M ,
\]

if h < 1 − δ, due to Kolmogorov’s compactness theorem (see for example [14] for Lp , the one-point set {f } being relatively compact. Hence, ||Ih f − f ||p ≤ ||I1 ||p + ||I2 ||p < (M + 1), for h < h0 (). Lemma 3.6. Assume that the kernel {Kh }0 0 (cf. [4]) and due to (7) ˆ R (n) = n + 1 lim Φ ∀n ∈ N0 . R→0+

From Z ΦR (t) = 1 +

∞ ∞X

R

ˆ ρ (n) C 1 (t) µ(ρ) dρ Ψ n

Z



=1+

Ψρ (t) µ(ρ) dρ R

n=1

and (8) we deduce that the kernel {ΦR } is uniformly bounded in the sense of (3) and thus Z ∞ X 1 n+1 lim ΦR (x · y) F (y) dσ(y) = Yn (F ; x) = F (x).  2 R→0+ 2π 2π 2 S3 n=1 Remark 4.7. The linear analysis here may be formally interpreted as bilinear analysis: Z ∞Z F (x) = (W T )(F )(ρ; y)δ(y · x) dσ(y) µ(ρ) dρ, 0

S3

where δ(x · y) =

∞ X

(n + 1)Cn1 (x · y)

n=0

is the Dirac distribution.


4.2 Bilinear Theory The concept of dilation and translation becomes more evident in bilinear theory. Definition 4.8. Let µ : [0, ∞) → R+ be a positive weight function and {Ψρ , ρ ∈ (0, ∞)}, be a subfamily of L21 [−1, 1] such that the following admissibility conditions are satisfied: • for n = m, m + 1, . . . Z



ˆ ρ (n) µ(ρ) dρ = (n + 1)2 , Ψ

(9)

0

• for n = 0, 1, . . . , m,

and allρ ∈ (0, ∞) ˆ ρ (n) = 0, Ψ

• for all R ∈ (0, ∞) Z

1

−1

Z



R

√ (Ψρ ∗ Ψρ )(t) µ(ρ) dρ 1 − t2 dt ≤ T,

(10)

(11)

where T is a positive constant independent of R. Then {Ψρ } is called a spherical wavelet of order m. The function Ψ = Ψ1 is called a spherical mother wavelet. The associated (linear) wavelet transform is defined by Z 1 (W T )(F )(ρ, y) := 2 Ψρ (x · y)F (x) dσ(x) 2π S 3 for all F ∈ L2 (S 3 ). Then Theorem 4.9. (Reconstruction formula). Let {Ψρ } be a wavelet (of order m). The wavelet transform is invertible on the range of all functions F ∈ L2 (S 3 ) with Fˆ (k, i) = 0 for k = 0, 1, . . . , m and i = 1, . . . , (k + 1)2 , in L2 -sense by Z Z ∞ 1 (W T )(ρ, y)Ψρ (x · y) µ(ρ) dρ dσ(x). F (y) = 2 2π S 3 0


Proof: Let R be a positive number. Then Z Z ∞ 1 (W T )(ρ, y)Ψρ (x · y) µ(ρ) dρ dσ(x) 2π 2 S 3 0 Z Z ∞ Z 1 1 Ψρ (x · z)F (z)dσ(z)Ψρ (x · y) µ(ρ) dρ dσ(x) = 2 2π S 3 0 2π 2 S 3 Z Z ∞ Z 1 1 = 2 Ψρ (x · z)Ψρ (x · y) µ(ρ) dρ dσ(x) F (z) dσ(z) 2π S 3 0 2π 2 S 3 | {z } Θ(y · z) Z Z ∞X Z ∞ 1 ˆ 2 (k) = 2 Ψ Ck1 (x · z)Ck1 (x · y) µ(ρ) dρ dσ(x)F (z) dσ(z) 2π S 3 0 k=0 ρ S3 Z X Z ∞ ∞ 1 1 ˆ 2ρ m(ρ) dρCk1 (x · z)F (z) dσ(z) = 2 Ψ 2π S 3 k=0 k + 1 R = (ΘR ∗ F )(y). Hence

∞ X

1 Θ(y · z) = k+1 k=m

Z



ˆ 2 µ(ρ) dρC 1 (x · z) Ψ ρ k

R

and uniformly bounded due to (11), further because of (9) we have ˆ R (k) = k + 1, for all k = m + 1, m + 2, . . . . lim Θ R→0+

(12)

Hence by (11) and (12) {ΘR , R > 0} is the kernel of an approximate identity and thus lim (ΘR ∗ F )(y) = F (y)

R→0

in the L2 (S 3 )-norm.  We can rewrite the reconstruction formula as: Z ∞ (Ψρ ∗ Ψρ ∗ F )(.) µ(ρ) dρ. F = 0

In the bilinear theory reasonable to introduce a scaling-function. Definition 4.10. The corresponding scaling-function {ΦR , R > 0} for a family of wavelets {Ψρ , ρ > 0} of order m is defined by  k + 1, k = 0, 1, . . . , m,  ˆ R (k) := R 1 Φ ∞ ˆ2 2  Ψρ (k) µ(ρ) dρ , k = m + 1, . . . . R It can be shown that that the scaling function ΦR belongs to L2 (S 3 ) and that lim (ΦR ∗ ΦR ∗ F )(y) = lim (ΘR ∗ F )(y) = F (y).

R→0

R→0

For details see [9].


5 Wavelets

The wavelets of order m corresponding to the Gauss–Weierstrass kernel look as follows:
\[
\hat{\Psi}_R(k) = \begin{cases} 0, & k = 0, 1, \ldots, m, \\[2pt]
\big( 2k(k+1)^2(k+2)\, e^{-2k(k+2)R}\, \mu^{-1}(R) \big)^{1/2}, & k = m+1, m+2, \ldots.
\end{cases}
\]
Those corresponding to the Abel–Poisson kernel are
\[
\hat{\Psi}_R(k) = \begin{cases} 0, & k = 0, 1, \ldots, m, \\[2pt]
\big( 2k(k+1)^2\, e^{-2kR}\, \mu^{-1}(R) \big)^{1/2}, & k = m+1, m+2, \ldots.
\end{cases}
\]
We plot Abel–Poisson wavelets of order 0 and choose µ(ρ) = 1/ρ⁴. It is not so simple to visualize wavelets on the 3D sphere, which is a subset of R⁴. To overcome the dimensional problem we consider only the upper half sphere
\[
S^3_+ := \Big\{ (x_1, x_2, x_3, x_4) \in \mathbb{R}^4 : \sum_{i=1}^{4} x_i^2 = 1 \ \text{and} \ x_1 \ge 0 \Big\},
\]
which can be identified with the unit ball in R³ by
\[
\sum_{i=1}^{4} x_i^2 = 1 \iff \sum_{i=2}^{4} x_i^2 = 1 - x_1^2 \le 1 .
\]
For a fixed x₁ we obtain a 2D sphere in R³ with radius √(1 − x₁²). The union of all these spheres for 0 ≤ x₁ ≤ 1 is the unit ball in R³. Abel–Poisson wavelet with ρ = 0.2 on the slices x₁ = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9:
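For readers who want to reproduce such plots, the zonal profile of the wavelet can be synthesized directly from its coefficients, using the zonal expansion Ψ_ρ(t) = Σ_k Ψ̂_ρ(k) C_k^1(t) suggested by (2). The following sketch (assuming NumPy/SciPy, the Abel–Poisson coefficients above, and µ(ρ) = ρ⁻⁴) truncates the series at a finite degree.

```python
import numpy as np
from scipy.special import eval_chebyu

def abel_poisson_wavelet(t, rho, m=0, kmax=200, mu=lambda r: r ** -4):
    """Zonal Abel-Poisson wavelet of order m, Psi_rho(t) = sum_k Psi_hat_rho(k) C_k^1(t),
    with the coefficients given above and mu(rho) = rho^{-4} as chosen in the text."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    result = np.zeros_like(t)
    for k in range(m + 1, kmax + 1):
        coeff = np.sqrt(2 * k * (k + 1) ** 2 * np.exp(-2 * k * rho) / mu(rho))
        result += coeff * eval_chebyu(k, t)
    return result

# localisation around t = cos(theta) = 1 sharpens as rho decreases
for rho in (0.1, 0.2, 0.5):
    vals = abel_poisson_wavelet(np.array([1.0, 0.9, 0.0, -1.0]), rho)
    print(rho, np.round(vals, 3))
```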


It is easily seen that the localization of the wavelets is good; the small positive and negative values are due to the fact that ρ = 0.2. In the next figures the influence of ρ is demonstrated on the slice x₁ = 0 for varying values of ρ. The delta-like peak is very sharp for small ρ and spreads out for larger values of ρ. Abel–Poisson wavelet with ρ = 0 … 3 on the slice x₁ = 0:

References

[1] J.-P. Antoine and P. Vandergheynst, Wavelets on the 2-sphere: A group-theoretical approach, Appl. Comput. Harmon. Anal. 7, 262-291 (1999).
[2] J.-P. Antoine and P. Vandergheynst, Wavelets on the n-sphere and other manifolds, J. Math. Phys. 39, 3987-4008 (1998).
[3] J.-P. Antoine, L. Demanet, L. Jacques and P. Vandergheynst, Wavelets on the sphere: implementation and approximations, Appl. Comput. Harmon. Anal. 13, No. 3, 177-200 (2002).
[4] H. Berens, P. L. Butzer and S. Pawelke, Limitierungsverfahren von Reihen mehrdimensionaler Kugelfunktionen und deren Saturierungsverfahren, Publ. Res. Inst. Math. Sci., Kyoto Univ., Ser. A 4, 201-268 (1968).
[5] A. Bonami and J.-L. Clerc, Sommes de Cesàro et multiplicateurs des développements en harmoniques sphériques, Trans. Amer. Math. Soc. 138, 223-263 (1973).
[6] A. P. Calderón and A. Zygmund, On a problem of Mihlin, Trans. Amer. Math. Soc. 78, 209-224 (1955).
[7] C. F. Dunkl, Operators and harmonic analysis on the sphere, Trans. Amer. Math. Soc. 125, 250-263 (1966).
[8] W. Freeden, T. Gervens and M. Schreiner, Constructive Approximation on the Sphere with Applications to Geomathematics, Numerical Mathematics and Scientific Computation, Oxford Science Publ., Clarendon Press, Oxford, 1998.
[9] S. Ebert, Wavelets on the three-dimensional sphere, Diplomarbeit (diploma thesis), TU Bergakademie Freiberg, 2008.
[10] H. Kalf, On the expansion of a function in terms of spherical harmonics in arbitrary dimensions, Bull. Belg. Math. Soc. 2, 361-380 (1995).
[11] S. G. Mikhlin and S. Prößdorf, Singular Integral Operators, Springer-Verlag, Berlin, 1986.
[12] K. Müller, Analysis of Spherical Symmetries in Euclidean Spaces, Appl. Math. Sciences 129, Springer-Verlag, New York, 1998.
[13] N. R. Wallach, Harmonic Analysis on Homogeneous Spaces, Pure and Applied Mathematics 19, Marcel Dekker, New York, 1973.
[14] J. Wloka, Partielle Differentialgleichungen, BSB B. G. Teubner Verlagsgesellschaft, Leipzig, 1982.


JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.1, 125-149, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

A Taste of Ideal Projectors

Boris Shekhtman
Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620
[email protected]
http://shell.cas.usf.edu/~boris/

March 16, 2009

Abstract. We survey the properties of ideal projectors and the structure of the family of ideal projectors onto a given finite-dimensional space of polynomials. In particular, we establish relations between ideal projectors, commuting matrices, zero-dimensional ideals, solutions of systems of PDEs and certain topics in algebraic geometry, such as Hilbert and border schemes.

1

Menu

As the title indicates, this survey does not offer a full course on ideal interpolation but rather, following the culinary analogy, a sample of what is available in this exciting area of multivariate interpolation. Although many of the results presented here are true for the real field as well as the complex field, I will limit myself to working in the space C[x] := C[x₁, …, x_d] of polynomials in d variables with complex coefficients.

Definition 1.1 ([Bi]) A linear idempotent operator P : C[x] → C[x] is called an ideal projector if ker P is an ideal in C[x].

Lagrange interpolation projectors, Taylor projectors and, in one variable, Hermite interpolation projectors are all examples of ideal projectors. For this reason ideal interpolation holds the promise of an elegant theory of multivariate extensions of univariate properties, which as a rule tend to be a messy subject. The brilliance of Birkhoff's idea lies in restricting the domain of the projectors to the space (ring, algebra) of polynomials C[x], thus allowing a whole slew of various mathematics to come into play. As luck would have it, the problems in ideal interpolation are closely related to problems in commutative and linear algebra, algebraic geometry and PDEs. Here is what is on the menu:

1

126

SHEKHTMAN: IDEAL PROJECTORS

1.1

Approximation Theory (AT)

The main objective of this paper is to study ideal projector onto a …xed N dimensional subspace G C[x]. We will denote the family of all such projectors by PG . By the end of this paper we will get to a remarkable fact: the structure (geometric, metric, topological) of PG depends on G and di¤ers for di¤erent spaces G of the same dimension.

1.2

Commutative Algebra (CA)

Every ideal projector P 2 PG determines a decomposition C[x] = J

G

(1.1)

with the ideal J = ker P . Thus studying PG is equivalent to studying the set JG of ideals in C[x] that complement G. Given an ideal J 2 JG , the space G spans the …nite-dimensional quotient algebra C[x]=J.

1.3

Algebraic Geometry (AG)

Every ideal J 2 C[x] generates a subset Z(J) = fz 2 Cd : f (z) = 0 for all f 2 Jg

(1.2)

which is called an a¢ ne algebraic set (variety, scheme). Given a basis g = (g1 ; ; gN ) of G we will construct an a¢ ne algebraic set, called the border scheme Bg that parametrizes PG (equivalently JG ) in a natural (continuous way).

1.4

Linear Algebra (LA)

A sequence of L = (L1 ; : : : ; Ld ) of commuting linear operators on G is called cyclic if there exists a vector g0 2 G such that ff (L1 ; : : : ; Ld )g0 : f 2 C[x]g = G

(1.3)

The vector g0 is called a cyclic vector for L. Let LG stand for the family of all such sequences. With every P 2 PG (J 2 JG ) we will associate an L 2LG and vise versa.

1.5

Duality (PDE)

Let g = (g1 ; : : : ; gN ) be the linear basis for G C[x]. Every …nite-dimensional projector P on C[x], ideal or not, can be written as X P = gk Fk (1.4)

where (Fk ; k = 1 : N ) consists of functionals in C0 [x] dual to (gk ; k = 1 : N ), i.e., hF; gj i = j;k . The space 2

SHEKHTMAN: IDEAL PROJECTORS

span fFk g = ran P = (ker P )?

127

(1.5)

is correct for G, thus for every F 2 ran P we have F (f ) = F (P f ) for all f 2 C[x]. In other words P “interpolates” the functionals F 2 (ker P )? . We will identify the functionals in C0 [x] with formal power series in C[[x]] and show that the ideal projectors correspond exactly to D-invariant subspaces of C[[x]]. These subspaces are precisely the spaces of solution of homogeneous systems of PDEs with constant coe¢ cients, hence the acronym.

1.6

Putting it all together

In this paper we will explain interrelationship between the notions mentioned above, present result and some open problems that are based on intricate interplay between these diverse …elds. To keep the size of the paper reasonable I had to make a choice between rigorous proofs and accessibility of the material to a reader not terribly familiar with algebraic geometry (such as myself). I chose the latter. Thus many of the proofs are going to be only highlighted (with detailed references to the original papers). Instead I will attempt to illustrate the results with generous serving of concrete examples. Bon appetit!

2 2.1

Antipasto de Boor’s equation

An obvious property of multiplication on C[x]=J: [f [g]] = [f g]

(2.1)

translates into the following characterization of ideal projectors : Theorem 2.1 (Carl de Boor [Bo1])A linear operator P : C[x] ! C[x] is an ideal projector if and only if P (f g) = P (f P (g))

(2.2)

for all f; g 2 C[x]. This theorem implies that unlike an arbitrary projector onto G, an ideal projector is completely determined by its values on a small set of polynomials. This makes sense. Imagine that you know that P is a Lagrange interpolation projector onto the space of polynomial of degree less than N in one variable. Knowing P xN we can form a polynomial xN P xN which has exactly N zeroes. These zeroes are the interpolation nodes (sites) for P , thus P xN determines P . In algebraic term the ideal xN P xN generated by xN P xN is the kernel of P .
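The univariate Lagrange case is easy to verify symbolically. The following sketch (assuming SymPy; the nodes 0, 1, 2 are an arbitrary choice) builds the interpolation projector onto polynomials of degree less than 3, checks de Boor's identity (2.2) on two test polynomials, and recovers the nodes as the zeroes of x³ − P x³.

```python
import sympy as sp

x = sp.symbols('x')
nodes = [0, 1, 2]                      # arbitrary interpolation sites

def P(f):
    """Lagrange interpolation projector onto polynomials of degree < 3."""
    return sp.expand(sp.interpolate([(z, sp.sympify(f).subs(x, z)) for z in nodes], x))

f = x**4 - 2*x + 5
g = 3*x**3 + x**2 - 1

# de Boor's identity P(f*g) = P(f*P(g)) characterizes ideal projectors
print(sp.simplify(P(f*g) - P(f*P(g))) == 0)      # True

# the kernel is the ideal generated by x^3 - P(x^3); its zeroes are the nodes
print(sp.solve(x**3 - P(x**3), x))               # [0, 1, 2]
```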

3

128

SHEKHTMAN: IDEAL PROJECTORS

2.2

The border bases

Here is the general situation: Let g = (g1 ; : : : ; gN ) be a linear basis for G. We de…ne the border of g to be @g := f1; xi gk ; i = 1; : : : ; d; k = 1; : : : ; N gnG.

(2.3)

For every J 2 IG , the decomposition (1.1) induces an ideal projector PJ onto G with ker PJ = J. From (1.1) it follows that for every ideal J 2 IG and for every b 2 @g there exists a unique (!) polynomial pb = PJ b 2 G such that b pb 2 J. As it turns out, the set fb pb ; b 2 @gg forms an ideal basis for J, called a (generalized) border basis. Proposition 2.2 Let J 2 IG and for every b 2 @g let pb := PJ b be the unique polynomial in G such that b pb 2 J. Then (i)fb pb ; b 2X @gg forms an ideal basis for J. (ii) If P f = a (f )x 2 G then the coe¢ cients a (f ) are polynomials 2Zd +

in the coe¢ cients of polynomials fP b; b 2 @gg. The simple proof of (i) can be found in [Bo1] and equally simple proof for (ii) is in [S5]. What about a converse? That is, what polynomials (pb ; b 2 @g) G have the property that the ideal hb pb ; b 2 @gi is in IG ? This is the question …rst dealt with in [Mo] with some additional assumptions on G (cf. also [Bo3] and [KR, 6.4B], ). The extension of their results is presented below. Mimicking the terminology of [KR, 6.4B], we will characterize those border prebases that are border bases. We will present necessary and su¢ cient conditions on polynomials fpb ; b 2 @gg for fb pb ; b 2 @gg to be a basis for an ideal in IG . As in [KR, 6.4B], the criterion involves formal multiplication operators Mj : G ! G de…ned by Mi gk =

xi gk pxi gk

if if

xi gk 2 G xi gk 2 =G

(2.4)

Here is the main theorem of this section: Theorem 2.3 Let (pb ; b 2 @g) be a sequence of polynomials in G. Then the ideal hf pf ; f 2 @gi 2 IG if and only if (i) Mi Mk = Mk Mi for all i; k = 1; : : : ; d, (ii) g(M1 ; : : : ; Md )p1 = g for all g 2 G. Proof. First assume that J = hb pb ; f 2 @gi 2 IG and let PJ be the ideal projector onto G with ker PJ = J. Then Mi g = PJ (xi g) for all i = 1; : : : ; d. It follows from (2.2) that Mj Mk (g) = PJ (xi PJ (xk g)) = PJ (xi xk g) = PJ (xk xi g) = Mk Mi (g)

4

SHEKHTMAN: IDEAL PROJECTORS

P which proves (i). Also observe that if g = a x , then, for M := (M1 ; : : : ; Md ) and g0 := P 1, we have X X g(M1 ; : : : ; Md )g0 = g = a M (PJ 1) = a PJ (x (PJ 1)) X X = a PJ (x ) = PJ ( a x ) = PJ g = g

which proves (ii). Now, suppose that (i) and (ii) holds. Then the mapping ' : C[x] ! C[x] de…ned by 'f = f (M)p1 is a ring homomorphism, hence it kernel by (ii)

K := ker ' = ff 2 C[x] : f (M)p1 = 0g = ff 2 C[x] : f (M) = 0g is an ideal in C[x]. By (ii) the range of ' is G and K \ G = 0. By the fundamental theorem of homomorphisms C[x]=K is isomorphic to G. In particular codimension of K is equal to dim G and K complements G. Let hb be the unique element in G such that b hb 2 K. We need to show that J = K or, alternatively that hb = gb for every b 2 @g. Since b hb 2 K we have by (ii) 0 = (b(M) hb (M))p1 = b(M)p1 hb On the other hand, by de…nition of M, we have b(M)p1 = pb which implies that pb = hb for all b 2 @g. Remark 2.4 If G is a D-invariant subspaces of |[x] spanned by monomials, then 1 2 G and, by the D-invariance, the condition (ii) of the Theorem 2.3 is automatically satis…ed (see example section 3). Hence the theorem 2.3 generalizes theorem 6.4.30 of [KR] with, what seems to be, a shorter, simpler proof, courtesy of the language of ideal projectors.

2.3

The border scheme

The operators M1 ; : : : ; Mk can be written as N N matrices in the basis g and, the polynomial p1 2 G generates an N 1 matrix of its coe¢ cients. De…nition 2.5 The a¢ ne scheme Bg de…ned by the ideal Ig generated by the entries of the matrices Mj Mi Mi Mj ; i; j = 1; : : : ; d and the coordinates of the vector p1 : hgk (M1 ; : : : ; Md )p1 gk i ; k = 1; : : : ; N (2.5)

is called the generalized border scheme for g or g-border scheme. It parametrizes the family of ideals IG or, equivalently, the family of ideal projectors PG . Proposition 2.6 It is clear from construction ideal projector P 2 PG (ideals J 2 IG ) are in one-to-one correspondence with the points in Bg . Thus we will sometimes refer to P 2 PG (J 2 IG ) as a point P 2 PG . 5

129

130

SHEKHTMAN: IDEAL PROJECTORS

3

Aperitif

In this section we will illustrate how the notions of the previous sections apply for concrete spaces G. About the easiest multivariate space one can …nd, is the space of linear function in C[x; y]. As you will see, even in this case the computations are quite involved. Nevertheless, they are worth going through. Once accustomed to it, this example acts like a Led Zeppelin record: when played backwards, it sends you messages about the general theory. (For the univariate theory see [S1]). We will now attempt to determine all ideal projectors onto the three dimensional subspace G C[x; y] spanned by its basis g = (1; x; y): By the theorem 2.3, to describe ideal projectors P onto G we only need know the values of P on the set of monomials x2 ; xy and y 2 . In other words the polynomials x2 P x2 , xy P (xy) and y 2 P y 2 will form an ideal basis for the ideal ker P . Assume that P is an ideal projector onto G and P x2 = a0 + b0 x + c0 y; P xy = a1 + b1 x + c1 y; P y 2 = a2 + b2 x + c2 y:

(3.1)

We need to …nd conditions on nine coe¢ cients (a0 ; a1 ; a2 ; b0 ; b1 ; b2 ; c0 ; c1 ; c2 ) that guarantee that the ideal x2 complements G? To answer this question 2 0 M1 = 4 1 0

P x2 ; xy

P xy; y 2

P y2

we form formal multiplication matrices 3 2 3 0 a1 a2 a0 a1 b 0 b 1 5 ; M2 = 4 1 b 1 b 2 5 c0 c1 0 c1 c2

(3.2)

and use Theorem 2.3. For our choice of G, the conditions (ii) of the theorem is automatically satis…ed. All that is left is to enforce the commutativity. The six quadratic equations obtained from M1 M2 M2 M1 = 0 are 8 (a0 b1 + a1 c1 ) (a1 b0 + a2 c0 ) = 0; > > > > (a (b0 b1 + b2 c0 ) = 0; > 1 + b0 b1 + b1 c1 ) > < c21 + b1 c0 (a0 + b0 c1 + c0 c2 ) = 0; (a0 b2 + a1 c2 ) (a1 b1 + a2 c1 ) = 0; > > > > (a b21 + b2 c1 = 0; > 2 + b0 b2 + b1 c2 ) > : (b2 c0 + c1 c2 ) (a1 + b1 c1 + c1 c2 ) = 0:

A close examination reveals that there are a lot of redundancy among these equation. The solutions to these equations are given by a0 = b0 c1 + c21 + b1 c0 c0 c2 ; a1 = b2 c0 b1 c1 ; a2 = b21 c2 b1 b0 b2 + b2 c1 : 6

(3.3)

SHEKHTMAN: IDEAL PROJECTORS

131

The border scheme Bg is a six-dimensional variety a¢ ne variety in C9 that consists of all nine-tuples (a0 ; a1 ; a2 ; b0 ; b1 ; b2 ; c0 ; c1 ; c2 )

(3.4)

satisfying (3.3). By checking (3.3) we see that the following four projectors de…ne by T : T x2 = T (xy) = T y 2 = 0, P : P x2 = y; P (xy) = P y 2 = 0, L : Lx2 = x; L(xy) = 0; Ly 2 = y, H : Hx2 = Hxy = Hy 2 = y

(3.5)

are in fact ideal projectors onto G. The …rst, T is the Taylor projector onto G, it interpolates f 2 |[x; y] and its …rst partial derivatives at 0. The dual space (ker P )? is a D-invariant subspace of |[[x; y]] spanned by 1; x; x2 + 2y. Thus P also interpolates at zero various derivatives at zero, namely 0; 0

Dx ;

0

(Dx2 + 2Dy )

and is a di¤erent projector. Hence, unlike the case in one variable there are two (in…nitely many) ideal projectors onto G such that Z(ker P ) = f0g. The projector L is a Lagrange projector interpolating at sites (0; 0); (1; 0) and (0; 1). The dual space (ker L)? is a span of the power series expansion of three distinct exponential functions: 1 = e0 ; ex and ey . Finally the last projector H interpolates at zero, at (1; 1) and the derivative Dx at zero. (ker H)? = spanfe0 ; xe0 ; ex+y g.
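The example can also be checked numerically. The sketch below (assuming NumPy) forms the formal multiplication matrices (3.2) for the Lagrange projector L, verifies the commutativity required by Theorem 2.3(i), and recovers the interpolation sites as joint eigenvalues of the pair, anticipating the discussion of radical ideals in Section 5.

```python
import numpy as np

# L x^2 = x, L(xy) = 0, L y^2 = y, written in the basis (1, x, y):
# columns of M1 are the coordinates of P(x*1), P(x*x), P(x*y),
# columns of M2 those of P(y*1), P(y*x), P(y*y).
M1 = np.array([[0., 0., 0.],
               [1., 1., 0.],
               [0., 0., 0.]])
M2 = np.array([[0., 0., 0.],
               [0., 0., 0.],
               [1., 0., 1.]])

# Theorem 2.3(i): the formal multiplication operators commute
assert np.allclose(M1 @ M2, M2 @ M1)

# A generic combination M1 + 2*M2 has distinct eigenvalues, so its eigenvectors
# diagonalize both matrices simultaneously; the diagonal pairs are the sites.
_, V = np.linalg.eig(M1 + 2.0 * M2)
D1 = np.linalg.inv(V) @ M1 @ V
D2 = np.linalg.inv(V) @ M2 @ V
sites = sorted((round(D1[i, i].real, 6), round(D2[i, i].real, 6)) for i in range(3))
print(sites)   # [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)] -- the Lagrange sites of L
```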

4

Soup and salad

4.1

Cyclic commuting matrices

Given P 2 PG and i = 1 : d, we de…ne multiplication operators Mi : G ! G by Mi (g) = P (xi g).

(4.1)

These operators are similar (literally and …guratively) to the multiplication maps mj on |[x]=J de…ned by mj ([f ]) := [xj f ] 2 |[x]=J for every [f ] 2 |[x]=J. A relationship between ideals, multiplication maps and numerical analysis was initiated and explored by H. Stetter [St]. Observe that these operators are precisely the operators de…ned in (2.4) with pxi gk = P (xi gk ). Therefore the sequence MP := (M1 ; : : : ; Md ) is a cyclic commuting sequence with the cyclic vector g0 := P (1). Nearly all the information about P can be read o¤ the sequence MP : Proposition 4.1 Let ' : C[x] ! C[x] be de…ned by '(f ) = f (MP )g0 2 C[x]. Then (i) ker P = ker ' = ff 2 C[x] : f (MP )g0 = 0g = ff 2 C[x] : f (MP ) = 0g. (ii) The restriction 'jG of ' to G is an isomorphism on G. 1

(iii) P = 'jG

' 7

132

SHEKHTMAN: IDEAL PROJECTORS

The converse to this is also true (cf. [BS]): Theorem 4.2 Let L := (L1 ; : : : ; Ld ) be a cyclic sequence of commuting N N matrices with a cyclic vector v0 2 CN . Let ' : C[x] ! C[x] be de…ned by '(f ) = f (L)v0 2 CN , let JL := ff 2 C[x] : f (MP ) = 0g

(4.2)

Then and L is similar to the matrices of multiplication operators MP of any ideal projector P with ker P = JL .

4.2

Duality and PDEs

The space C[[x]] C[x] is the space of all formal power series in d variables. A generic element f 2 C[[x]] is written as a formal sum X f (x) = f^( )x (4.3)

where = ( 1 ; : : : d ) runs through all multiindices in Zd+ and x = x1 1 : : : xd d . Given f 2 C[[x]] and a sequence L = (L1 ; : : : ; Ld ) of commuting operators on some linear space V , we de…ne a formal operator f (L) on V to be X f (L)v = f^( )L1 1 : : : Ld d v. In particular if Di are operators of di¤erentiation on V = C[x] with respect to the variable xi and D := (D1 ; : : : ; Dd ) then for every F 2 C[[x]] and every f 2 C[x] the pairing hF; f i := (F (D)f )(0) =

X

_

!F^ ( )f^( )

de…nes an isomorphism between C[[x]] and the algebraic dual C0 [x] of C[x]. It is easy to see that hF; xi f i = hDi F; f i , (4.4) in other words Di on C[[x]] is an adjoint of the operator of multiplication by xi on C[x]. Example: It is easy (and insightful) to check that the point evaluation functional z : z (f ) = f (z) on C[x] corresponds to the functional F 2 C[[x]] de…ned by the expansion of ez x in powers of x. The following theorem due to Macauley [Ma] follows immediately from (4.4): Theorem 4.3 A subspace J C[x] is an ideal i¤ J ? i.e., F 2 J ? implies Di F 2 J ? .

C[[x]] is D-invariant,

Thus the study of JG (hence JG ) is equivalent to the study of D-invariant subspaces C[[x]] that are correct for G. Example: Keeping in mind the example above, we conclude that the Lagrange interpolation projectors are determined by subspaces of C[[x]] which are 8

SHEKHTMAN: IDEAL PROJECTORS

133

spanned by pure exponentials: = spanfezj x :j = 1 : N g while the Taylor projector is determined by a span of pure polynomials: = spanfx : j j ng =: C[x] n . As the multiplication matrices are associated with the ideal projector, the adjoints of those matrices determine the rules of di¤erentiating functions in (ker P )? . Let J 2 IG . For every F 2 J ? we denote the restriction of F to G by F . Since G complements J, it follows that J ? is an N -dimensional subspace over G, hence the dual space G is (J ? ) := fF ; F 2 J ? g.

(4.5)

If PJ is the ideal projector onto G with ker P = J, then X gj Fj PJ =

(4.6)

where g = (g1 ; : : : ; gN ) is a basis for G, and (F1 ; : : : ; FN ) is the dual basis in J ? , i.e., P Fj (gk ) = j;k . The adjoint operator P : C[[x]] ! C[[x]] is a projector PJ = Fj gj having J ? as its range. Theorem 4.4 Let J 2 IG and let MJ = (M1 ; : : : ; Md ) be the matrices of multiplication operators: Mi (g) = PJ (xi g)

in the basis g = (g1 ; : : : ; gN ). Let (F1 ; : : : ; FN ) be the dual basis in J ? , i.e., Fj (gk ) = j;k . Then 2 3 2 3 F1 F1 6 7 6 7 Di 4 ... 5 = Mi 4 ... 5 (4.7) FN

FN

Proof. For every g 2 G we have F (Mi g) = F (Mi g) = (Mit F )(g). On the other hand F (Mi g) = F (P (xi g)) = (P F )(xi g) = F (xi g) = (Di F )(g) = (Di F ) (g), where the third equality follows F 2 J ? =ran P . Hence (Mit F ) = (Di F ) . P (i) (i) This means that (Di Fk ) = mj;k Fj , where mj;k is the j; k-entry in the P (i) matrix Mi . On the other hand, by the @-invariance, Di Fk = aj;k Fj for some (i)

(i)

(i)

coe¢ cients aj;k . Since (Fj ) is a basis in G it follows that aj;k = mj;k .

4.3

Primary ideals

An ideal J or g 2 J.

C[x] is called primary if f g 2 J implies f m 2 J for some m 2 N

Theorem 4.5 Let J be a zero-dimensional ideal in C[x]. Then the following are equivalent: (CA) J is primary 9

134

SHEKHTMAN: IDEAL PROJECTORS

(G) Z(J) = fzg, i.e., Z(J) consists of one point (PDE) J ? = ez x M where M C[x] is a d-invariant subspace of polynomials (!). (LA) (ML ) = fzg. The equivalence of (CA) and (G) is standard (cf. [CLO1], [BR]). The (PDE) was explored in [M] (cf.also [Bo1]). The (LA) follows from [MSh].

4.4

Primary decomposition

De…nition 4.6 Let L := (L1 ; :::; Ld ) be a d-tuple of operators on V . A direct sum decomposition V = V1 V2 ::: Vt (4.8) is L-invariant if each subspace Vk , k = 1; :::; t is an invariant subspace for each of the operators Lj ; j = 1; :::; d. Letting Lj;k := Lj jVk denote the restriction of Lj onto Vk we write Lk = L jVk := (L1;k ; :::; Ld;k ).

(4.9)

The simultaneous block-diagonalization of L into t blocks amounts to nothing more then the L-invariant decomposition (4.8) of V : Indeed, for an appro~ j of Lj can be written in a block-diagonal priately chosen bases, the matrix L form 2 ~ 3 Lj;1 0 0 ~ j;2 6 0 L 0 7 7 ~j = 6 L (4.10) 6 . . .. 7 , . .. .. 4 .. . 5 ~ j;t 0 0 L A d-tuple := ( 1 : : : d ) 2 Cd is called an eigentuple of L if there exists a non-zero vector v 2 such that Lj v = j v for all j = 1 : d. The set of all eigentuples of L is called the joint spectrum of L and is denoted by (L). The next proposition seems to be well-known among experts. For a cute proof we refer to [MSh]: Proposition 4.7 Let L be a d-tuple of pairwise commuting operators on V . Then (i) (L) = Z(JL ). (ii) There exists an L-invariant decomposition of V : V = V1

V2

:::

V#

(L) .

(4.11)

Adding the assumption that L is cyclic gives us more: Theorem 4.8 (cf. [MSh]) If L is cyclic then (4.11) is a unique maximal Linvariant decomposition of V : V =

2 (L) V

10

,

SHEKHTMAN: IDEAL PROJECTORS

135

i.e., the space V cannot be decomposed into more than # (L) of L-invariant subspaces. More over (LjV ) = f g, hence consists of a single eigentuple. The reason is that if L is cyclic then eigenvectors for L in di¤erent Vk and Vj correspond to di¤erent eigentuples. Theorem 4.9 Let J be a zero-dimensional ideal with Z(J) = fz1 ; : : : zm g. Then (LA) There exists unique (up to the order) maximal MJ -invariant decomposition G = G1 G2 ::: Gm (4.12) and (MJjGj ) = fzj g. (AT) The ideal projector PJ has a unique maximal decomposition as a sum of m ideal projectors: P = P1 + : : : Pm (4.13) where each Pj is an ideal projector onto Gj interpolating at exactly one point and Pk Pj = k;j Pj . (CA: Lasker-Noether) There is unique (minimal with respect to containment) primary decomposition J = \Jj (4.14) with each Jj is a primary ideal with Z(Jj ) = fzj g. (PDE) The subspace J ? C[[x]] has a unique maximal decomposition J ? = ez1 x H1

:::

ezm x Hm

(4.15)

where each Hj is a D-invariant subspace of polynomials. The (CA) is the famous Lasker-Noether theorem, (PDE) was observed by [M] (cf. also [Bo1], [BR]) and follows from (CA) since (4.14) implies J ? = Jj? . (LA) and (AT) are from [MSh].

5 5.1

Appetizers Radical ideals

An ideal J

C[x] is called a radical ideal if f m 2 J implies f 2 J.

Theorem 5.1 Let J C[x] be a zero-dimensional ideal. Than the following are equivalent: (A): J is a radical ideal. (G): #Z(J) =codimJ (PDE): J ? is a linear span of pure exponentials: J ? = spanfezk x ; k = 1 : N g. (LA): The matrices (M1 : : : Md ) = MJ are simultaneously diagonalizable. Moreover, the diagonal elements consist of interpolation sites for the ideal projector PJ . (AT): PJ is a Lagrange projector interpolating at sites fz1 : : : zN g 11

136

SHEKHTMAN: IDEAL PROJECTORS

Proof. Again, equivalence of (A) and (G) is standard (cf. [CLO1],). (PDE) follows from Theorem 4.5. (LA) was …rst observed in [St], cf. also [Bo1] and [BS]; but now follows from Theorem 4.8. Example: The projector L from (3.5) is a Lagrange projector interpolating at sites (0; 0); (1; 0) and (0; 1). The multiplication matrices for this projector are 02 3 2 31 02 3 2 31 0 0 0 0 0 0 0 0 0 0 0 0 ML = @4 1 1 0 5 ; 4 0 0 0 5A @4 0 1 0 5 ; 4 0 0 0 5A 0 0 0 1 0 1 0 0 0 0 0 1 that is the matrices of ML are diagonalizable. J ? = spanf1 = e0 ; ex ; ey g which is the span of pure exponential.
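A small numerical illustration of the (A) ⇔ (LA) equivalence (a sketch assuming NumPy; the matrices are those of L above and of the Taylor projector T from (3.5)): for the radical ideal ker L a generic linear combination of the multiplication matrices is diagonalizable, while for ker T the combination is nilpotent and hence not diagonalizable.

```python
import numpy as np

# multiplication matrices in the basis (1, x, y)
ML = (np.array([[0., 0., 0.], [1., 1., 0.], [0., 0., 0.]]),   # L: Lagrange at (0,0),(1,0),(0,1)
      np.array([[0., 0., 0.], [0., 0., 0.], [1., 0., 1.]]))
MT = (np.array([[0., 0., 0.], [1., 0., 0.], [0., 0., 0.]]),   # T: Taylor projector at the origin
      np.array([[0., 0., 0.], [0., 0., 0.], [1., 0., 0.]]))

def diagonalizable(A, tol=1e-8):
    """A is diagonalizable iff its (numerically computed) eigenvector matrix has full rank."""
    _, V = np.linalg.eig(A)
    return np.linalg.matrix_rank(V, tol) == A.shape[0]

for name, (M1, M2) in (("L", ML), ("T", MT)):
    A = M1 + 2.0 * M2            # generic linear combination
    print(name, "commute:", np.allclose(M1 @ M2, M2 @ M1),
          "diagonalizable:", diagonalizable(A))
# L: commute True, diagonalizable True   ->  ker L is radical
# T: commute True, diagonalizable False  ->  ker T is not radical
```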

Remark 5.2 The equivalence of (A) and (LA) of the theorem was …rst observed by Hans Stetter [St]. He also noticed that the joint spectrum (diagonal pairs) of ML are precisely the interpolation sites, which also follows from the Theorem 4.8. More over the eigenvectors corresponding to these eigentuple are basic Lagrange polynomials that vanish on all points but one.

5.2

Curvilinear ideals

An ideal J is called curvilinear if there exists a linear form X =

d X

j xj

such

j=1

that J complements the space spanf1; X; :::; X N

1

g.

Theorem 5.3 For an ideal J 2 JG the following are equivalent: (AT): J is curvilinear d X (LA): The matrix M := j Mj is non-derogatory. j=1

(CA) Every ideal Jk in the primary decomposition J = \sk=1 Jk is curvilinear. #Z(J) (PDE): If J ? = k=1 (ezk x Hk ) then each Hk contains at most one linear polynomial. Proof. Let GX := spanf1; X; : : : X N 1 g and letP Q be an ideal projector onto GX with MQ = (L1 ; : : : ; Ld ) and de…ne LX := P ai Li . By the Theorem 2.3, 1 (1; LX 1; : : : ; LN 1) spans the space GX , henceP ai Li is non-derogatory. By X the theorem 2.7, MQ is similar to MP hence ai Mi is similar to LX hence non-derogatory. This proves (LA). The converse: (LA))(AT) immediately (k) follows from Proposition 4.1. To Pprove (CA), P let MJ = diag(MJ ) and Mi = diag(Mi;k ) P as in (4.10). Then ai Mi = ai diag(Mi;k ) is non-derogatory if and only if i ai Mi;k is non-derogatory for each k. Finally, to prove the equivalence of (PDE) to the rest of the statements of the theorem, observe that, by (CA) it is enough to prove this for primary ideals J with V(J) = f0g, i.e., with J ? C[x]. Without loss of generality, assume 12

SHEKHTMAN: IDEAL PROJECTORS

137

1 that J 2 IG with G = spanf1; x1 ; : : : ; xN g. This means that the dual basis 1 ? for J is of the form fxk1 + fk ; k = 0; : : : ; N 1g (5.1)

1 where fk do not contain the monomials f1; x1 ; : : : ; xN g. In particular, it (5.1) 1 contains at most one linear form: x1 + f1 . Conversely, assume that x1 is the only linear form in J ? and J is not curvilinear. Then J ? could not have a basis of the form (??) hence no polynomial in J ? can contain a monomial xm N 1, for otherwise its consecutive 1 for m derivatives would produce a basis of the form (3.5). Since J ? is N -dimensional, it must contain a polynomial F that has no pure powers of x1 in it. But that means that an appropriate derivative of F will give a linear form xk with k 6= 1. Indeed let u = x1 1 : : : xk k : : : xd d be a monomial of the highest degree in F such that k 1 for some k > 1. Then

xk = (D1 1 : : : Dk k

1

: : : xd d )u = (D1 1 : : : Dk k

1

: : : xd d )F 2 J ?

which gives the desired contradiction. Corollary 5.4 Every radical ideal projector is curvilinear. Converse is not true. P Proof. Diagonalizing taking a generic linear combination ai Mi we P MJ and conclude that S( ai Mi )SP1 has distinct elements on the diagonal, hence is non-derogatory. Therefore ai Mi is non-derogatory and J is curvilinear. The second part of the statement follows from the example below. Example: The kernels of projectors P ; L and H are curvilinear ideals. In particular, looking at the duals we see that (ker P )? contains precisely one linear term while none of the Hj for (ker L)? contain linear terms since (ker L)? is a combination of pure exponentials. The ideal (ker P )? is not radical, yet curvilinear. It complements the spanf1; y; y 2 g. The (ker H)? has one summand with the linear term and one without. Both ideals (ker H)? and (ker L)? complement the space, say, f1; x + y; (x + y)2 g. The kernel of the Taylor projector ker T is not curvilinear since (ker T )? contains two linear forms. Remark 5.5 The equivalence of (AT) and (LA) was observed in [BS] and [Co]. I learned the equivalence of (AT) and (PDE) from [Co]. The proof presented here is quite di¤ erent, and in my opinion, much simpler then the one in [Co].

5.3

Gorenstein Ideals

De…nition 5.6 A zero-dimensional ideal J 2 JG is called Gorenstein if there exists one function F 2 C[[x]] such that D(F ) = J ? . Gorenstein ideals form an important class of ideal in commutative algebra (cf. [B]).

13

138

SHEKHTMAN: IDEAL PROJECTORS

Theorem 5.7 Let J 2 JG be a zero-dimensional ideal. The following are equivalent: (PDE) J is Gorenstein. (LA1) The sequence of adjoints MJ = (M1 ; : : : ; Md ) is cyclic. (LA2) MJ is similar to MJ (CA) Every ideal Jk in the primary decomposition J = \sk=1 Jk is Gorenstein. Proof. M is cyclic if and only if there exists an F 2 J ? such that fp(M )F ; p 2 C[x]g = G . This is equivalent to f(p(D)F ) ; p 2 C[x]g = G which, in turn, is equivalent to D(F ) = f(p(D)F ); p 2 C[x]g = J ? . To prove (LA2) observe the ideals generated by MJ and MJ by (4.2) are equal. Thus, by the theorem 4.2 the sequence MJ is similar to MJ . Example 5.8 are 02 0 MT = @4 1 0

The multiplication operators of the Taylor projector T from (3.5) 0 0 0

3 2 0 0 0 5;4 0 1 0

0 0 0

31 02 0 0 0 5A ; MT = @4 0 0 0

1 0 0

3 2 0 0 0 5;4 0 0 0

0 0 0

31 1 0 5A 0

and MT is not a cyclic sequence, as was noted in [BS]. The reason for this is that (ker T ) = spanf1; x; yg is not a de‡ation of a single polynomial. On the other hand the projector P , being curvilinear is Gorenstein 3 2 31 02 3 2 31 02 0 0 0 0 1 0 0 0 1 0 0 0 MP = @4 1 0 0 5 ; 4 0 0 0 5A ; MP = @4 0 0 1 5 ; 4 0 0 0 5A 1 0 0 0 0 0 0 0 0 0 1 0

and MP is a cyclic sequence with cyclic vector (0; 0; 1). Observe that the dual (ker P )? are given by (1; x; 2y + x2 ) and D(2y + x2 ) = (ker P )? . Finally consider F = xy, then D(F ) = span f1; x; y; xyg and J = ff 2 C[x; y] : P (f ) = 0; P 2 D(F )g is Gorenstein, yet not curvilinear, since J polynomials: x and y of degree 1.

?

(5.2)

contains two linearly independent

Proposition 5.9 If J is curvilinear, then J is Gorenstein. P Proof. If J is curvilinear then there exists a linear combination M0 := aj Mj such that M0 is non-derogatory. This implies that M is non-derogatory, hence cyclic. Thus M = (Mj ) is cyclic and J is Gorenstein. Theorem 5.7 has an interesting and unexpected corollary to linear algebra: Every square matrix M is similar to its adjoint. This is not the case for sequences of commuting matrices as is seen in the example where MT is not cyclic. From equivalence of (LA1) and (LA2) of the theorem 5.7 we immediately obtain the following corollary: Theorem 5.10 A cyclic sequence L = (L1 ; : : : ; Ld ) is similar to its adjoint L = (L1 ; : : : ; Ld ) if and only if L is cyclic. 14

SHEKHTMAN: IDEAL PROJECTORS

5.4

139

Hermite projectors

De…nition 5.11 A projector P 2 PG is called Hermite if there exists a sequence of Lagrange projectors Pn such that Pn f ! P f

(5.3)

for every f 2 C[x]. An ideal J = ker P for Hermite projector P is called a Hermite ideal Proposition 5.12 Let J = \Jk be a primary decomposition of the ideal J 2 JG . Then J is Hermite if and only if each Jk is. I have a messy proof for this yet I feel that there should exist a short concise one that, so far, I failed to …nd. Every Lagrange projector is Hermite. In one variable every ideal projector P is Hermite since the polynomial h 2 C[x] that generates the ideal ker P can be approximated by polynomials hn of the same degree with distinct zeroes. The ideal projectors with ideals generated by hn are Lagrange.

6

Main course

My personal journey through the landscape of ideal projectors began with a question of Carl de Boor [Bo1]: Is every …nite-dimensional ideal projector Hermite? Ironically there where questions in other areas of mathematics that at the end turned out to be the same or almost the same questions. Here is the list: Problem 6.1 (AT, [Bo1]) Is every …nite-dimensional ideal projector Hermite? (AG, [Fo]) Is the border schemes Bg irreducible (LA, [Ge], [Gu]) Can every d-tuple of commuting matrices be approximated by commuting diagonalizable d-tuple of matrices (PDE, [Le]) Can a space of solutions of the homogeneous systems of PDEs with constant coe¢ cients be approximated by spaces of pure exponentials. With the hindsight of the previous chapters, you can guess that the answer to all these question is the same. It turns out to be: “Yes” in one and two variables and “No” in three or more variables. For (AT) and (PDE) this was done in [S2], for (AG) in [Fo] and [Ia] John Fogarty [Fo] proved a remarkable theorem, which translated from Hilbert schemes to border schemes says: Theorem 6.2 If g spans an N -dimensional subspace of C[x; y] then Bg is smooth, connected, irreducible a¢ ne variety of dimension 2N . Before we explain the words smooth and irreducible, let us lay the groundwork for asking questions. Are any of the statements in the theorem 6.2 true 15

140

SHEKHTMAN: IDEAL PROJECTORS

for border schemes in more than two variables? Do the answers depend on particular space G? Surprisingly the answer to the …rst question is “No” and to the second is “Yes”. Somehow, in little understood way, the structure of the border scheme depends strongly on the subspace G.

6.1

Topology

Next proposition says that various topologies on Bg are all the same: Theorem 6.3 Let Pn and P be in PG .The following are equivalent (AT) Pn f ! P f for every f (AG) Pn ! P as points in Bg (LA) The matrices MPn ! MP (PDE) For every F 2 (ker P )? there exist Fn 2 (ker Pn )? such that Fn (f ) ! F (f ) for every f 2 C[x]. Proof. The equivalence of (AT), (AG) and (LA) are immediate from Proposition 2.2 (ii) and constructions of MP . The equivalence of (PDE) is intuitively obvious. Since Pn and P are totally determined by their kernels and ranges and the ranges are the same, the kernels have to converge. The rigorous proof takes a little bit of doing (cf. [S3], [S4])

6.2

Irreducibility

A scheme (a¢ ne variety) Z called irreducible if Z is not a union of two proper subvarieties of Z. So what does the irreducibility of Bg has to do with Hermite projectors? Before answering this question, let us point out a few facts. In what follows we will freely identify the projectors in PG with points in Bg . Proposition 6.4 The set Lg := fP 2 Bg : P is Lagrangeg is a Zarisski open subset of Bg , i.e., the complement to Lg is a subvariety of Bg . In particular, Zarisski and Euclidean closure of Lg in Bg coincides and the dimension of this closure is dN . Proof. To prove that the complement of Lcg of Lg is Zarisski closed, we need to show that the points P 2 Lcg are characterized by being solutions of polynomial equations. This can be easily done (cf. [S5]). The coe¢ cients de…ning Lagrange projectors P 2 Lg are solutions to interpolation problems, hence, by Cramer’s rule can be expressed as ratios of determinants, that re polynomials in their entries (cf. [S5]). The entries depend on N interpolation points in Cd (dimension N d). The upshot is that Lg has a rational parametrization, hence the Zarisski closure of L~g of Lg is irreducible. Moreover, since Lg is Zarisski open in an irreducible variety L~g , it Zarisski closure coincides with its Euclidean closure (cf.[Mu]) which, by theorem 6.3, is the subvariety of Bg that consists of Hermite projectors. 16

SHEKHTMAN: IDEAL PROJECTORS

Let Hg Bg be the irreducible subvariety of Bg consisting of Hermite projectors. We can now answer the question posed at the beginning of this subsection: Theorem 6.5 ([S5]) Let g be any basis for G. Then Hg is irreducible subvariety of dimension dim Hg = dN and Bg is irreducible if and only if every P 2 PG is Hermite. In this case dim Bg = dN . Proof. If Hg 6= Bg then, since the complement Lcg of Lg is a subvariety of Bg , it follows that Bg = Hg [ Lcg and Bg is reducible.

6.3

Hermite and non-Hermite projectors

Here is an immediate corollary of Theorem 6.3: Proposition 6.6 Let P 2 PG . The following are equivalent: (AT) P is Hermite, i.e., can be approximated by Lagrange projectors (PDE) (ker P )? is the limit of subspaces consisting of pure exponentials (LA) MP can be approximated by a commuting sequence of diagonalizable matrices Corollary 6.7 If G Bg is irreducible.

C[x; y] then every P 2 PG is Hermite and the scheme

Proof. By Proposition 6.6, we only need to prove that every pair of commuting matrices can be approximated by diagonalizable matrices, since diagonalizable matrices correspond to Lagrange projectors. But this is a known fact from [MT]. For a nice proof of it see [BS] and [Gu]. Theorem 6.8 If d > 2 then for some G C[x1 ; : : : ; xd ] the scheme Bg is not irreducible, hence there are projectors P 2 PG that are not Hermite. Proof. (Essentially due to Iarrobino [Ia], cf. also [S5] and [S6]). Let U C[[x]] be the span of ru- y half of monomials of degree n and let V be the other half. Now let E C[[x]] be the space spanned by C[x] 0; β ∈ (−α, α) and its characteristic function is given by: ¶¾ ½ µq p 2 2 2 2 α − (β + iu) − α − β φN IG (u; α, β, δ) = exp −δ

Thus, Re (φX (u)) = c cos(d); Im (φX (u)) = c sin(d), where µ p ¶ ³¡ ´0.25 ¢ 2 2 2 2 2 2 2 2 c = exp δ α − β − δ α − β + u + 4u β cos (e) , d = −δ

´0.25 ³¡ ¢2 α2 − β 2 + u2 + 4u2 β 2 sin (e) ,

e = 0.5 arctan

α2

−2uβ . − β 2 + u2

Moreover, if we assume f (X) = X, the mean, the variance, the skewness and the kurtosis of Normal Inverse Gaussian distributions are given by: ¢−1/2 ¢−3/2 ¡ ¡ ; σf2 = α2 δ α2 −Ãβ 2 ; mf (α, β, δ) = δβ α2 − β 2 ! 3β α2 + 4β 2 p ; kf (α, β, δ) = 3 1 + sf (α, β, δ) = √ p . α δ · 4 α2 − β 2 δα2 α2 − β 2 ∂m (θ)
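Since the estimating equations use only the real and imaginary parts of the characteristic function, the formulas above can be checked against a direct complex-arithmetic evaluation of φ_NIG. The following sketch (assuming NumPy; the parameter values are arbitrary but admissible) also verifies the stated mean against a numerical derivative of the characteristic function.

```python
import numpy as np

def phi_nig(u, alpha, beta, delta):
    """Characteristic function of the Normal Inverse Gaussian distribution:
    phi(u) = exp(-delta * (sqrt(alpha^2 - (beta + i u)^2) - sqrt(alpha^2 - beta^2)))."""
    u = np.asarray(u, dtype=complex)
    return np.exp(-delta * (np.sqrt(alpha**2 - (beta + 1j * u)**2)
                            - np.sqrt(alpha**2 - beta**2)))

alpha, beta, delta = 2.0, 0.5, 1.2          # arbitrary admissible parameters
u = np.linspace(-5, 5, 11)
phi = phi_nig(u, alpha, beta, delta)
print(phi.real[:3], phi.imag[:3])           # Re and Im parts, cf. the c cos(d), c sin(d) formulas

# mean and variance from the stated formulas, the mean cross-checked numerically
mean = delta * beta / np.sqrt(alpha**2 - beta**2)
var = alpha**2 * delta * (alpha**2 - beta**2) ** -1.5
h = 1e-5
num_mean = np.imag(phi_nig(h, alpha, beta, delta) - phi_nig(-h, alpha, beta, delta)) / (2 * h)
print(mean, num_mean, var)                  # mean and num_mean agree to ~1e-6
```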

∂σ 2 (θ)

X (t)) X (t)) and ∂ Re(φ are given in Tables I The derivatives ∂θfk , ∂θf k ∂ Im(φ ∂θk ∂θk and III. The Carr, Geman, Madan, Yor (CGMY) distribution (see Carr et al. (2002)) A CGMY distribution depends on four parameters C, G, M > 0; Y < 2 and its characteristic function is given by:

¤ª © £ φCGMY (u; C, G, M, Y ) = exp CΓ(−Y ) (M − iu)Y − M Y + (G + iu)Y − GY . Thus, Re (φX (t)) = c cos(d); Im (φX (t)) = c sin(d), where ¶ ½ ∙ µ ¢Y /2 ¡ −t + c = exp CΓ(−Y ) −M Y − GY + M 2 + t2 cos Y arctan M ¶¸¾ µ ¡ ¢Y /2 t + G2 + t2 cos Y arctan G 11

CAVIEZEL ET AL: SEMIPARAMETRIC ESTIMATORS

161

Table III This table summarizes the derivatives used in the trigonometric estimators for NIG, and Stable distributions. Sα ( β ,σ , μ )

where

∂ Re 1 = − p ⎡ β π + 2q ln tσ + π q 2 sgn t sin m + 2ln tσ cos m ⎤ ⎦ ∂α 2 ⎣

p = tσ e

∂ Re = − pq sgn t sin m ∂β

q = tan

α ∂ Re − tσ = − te sin m ∂μ

m = t μ + β q tσ sgn(t )

(

)

α

− tσ

α

πα 2 α

∂ Re = −α pt sgn (tσ ) [ cos m + β q sgn t sin m ] ∂σ ∂ Im 1 ⎡ = p β π + 2q ln tσ + π q 2 sgn t cos m − 2ln tσ sin m ⎤ ⎦ ∂α 2 ⎣ ∂ Im = pq sgn t cos m ∂β

(

)

α ∂ Im − tσ = te cos m ∂μ

∂ Im = −α pt sgn ( tσ ) [sin m − β q sgn t cos m ] ∂σ

NIG ( α , β , δ )

where

∂ Re αδ w ⎡ = n β 2 − α 2 − t 2 cos q + m 3 cos (δ m sin p ) − 2tn β sin q ⎤ ⎦ ∂α nm 3 ⎣ ∂ Re δ w ⎡ = n β α 2 − β 2 − t 2 cos q − β m3 cos (δ m sin p ) + nt α 2 + β 2 + t 2 sin q ⎤ ⎦ ∂β nm3 ⎣

(

)

(

)

(

)

(

m = 4 t 4 + α 4 + β 4 + 2 t 2α 2 + t 2 β 2 − α 2 β 2

n = α2− β2

1 2t β arctg 2 2 t +α 2 − β 2

∂ Re = w ⎡⎣n cos ( δ m sin p ) − m cos q ⎤⎦ ∂δ

p=

∂ Im αδ w ⎡ = n α 2 − β 2 + t 2 sin q + m 3 sin (δ m sin p ) − 2 tn β cos q ⎤ ⎦ ∂α nm 3 ⎣

w= e

(

)

∂ Im δ w ⎡ = n β β 2 − α 2 + t 2 sin q − β m3 sin ( δ m sin p ) + nt α 2 + β 2 + t 2 cos q ⎤ ⎦ ∂β nm 3 ⎣

(

)

(

)

)

δ ( n − m cos p )

q = p − δ m sin p

∂ Im = w ⎡⎣n sin (δ m sin p ) + m sin q ⎤⎦ ∂δ

¶ µ −t + d = CΓ(−Y ) M + t sin Y arctan M ¶ µ ¡ ¢Y /2 t + CΓ(−Y ) G2 + t2 . sin Y arctan G ¡

2

¢ 2 Y /2

Moreover, if we assume f (X) = X, the mean, the variance, the skewness and the kurtosis of CGMY distributions are given by: ¢ ¡ mf (C, G, M, Y ) = C¡ M Y −1 − GY −1¢ Γ(1 − Y ); σf2 (C, G, M, Y ) = C M Y −2 + GY −2 Γ(2 − Y );

12

162

CAVIEZEL ET AL: SEMIPARAMETRIC ESTIMATORS

sf (C, G, M, Y ) =

¡ ¢ C M Y −3 − GY −3 Γ(3 − Y )

; 3/2 (C (M Y¡−2 + GY −2 ) Γ(2¢− Y )) C M Y −4 + GY −4 Γ(4 − Y ) kf (C, G, M, Y ) = 3 + 2. (C (M Y −2 + GY −2 ) Γ(2 − Y ))

The derivatives and II.

3

2 ∂mf (θ) ∂σf (θ) ∂ Im(φX (t)) ∂θk , ∂θk ∂θk

X (t)) and ∂ Re(φ are given in Tables I ∂θk

An empirical comparison between the GT moment estimator and the maximum likelihood estimator of stable Paretian distributions

In this section we compare the GT moment estimator and the MLE obtained by inverting the characteristic function of stable Paretian distributions (see Rachev and Mittnik (2000)). In particular, we test the above semi-parametric estimator for stable distributions using simulated data. Therefore, using the algorithm proposed by Chambers et al. (see Chambers et al. (1976)), we generate N (N =200,. . . ,5000 with step 100) stable distributions Sα (σ, β, μ) with parameters α = 0.51, 0.76, 1.26, 1.51, 1.76; β = −1, −0.5, 0.5, 1; σ = 1; μ = 1. Then we estimate the parameters on the simulated data (for each N ) considering both estimating methods: the GT moment estimator, and MLE valued inverting the characteristic function with the FFT. As starting point for the GT moment and MLE estimators we use the parameters obtained estimating the series of N elements with the McCulloch quantile method (see, among others, Rachev and Mittnik (2000)). We obtain the results of GT moment minimizing the sum of the asymptotic variances subject to the usual constraints. This computation requires less time than the MLE approximation. Minimizing the determinant of the asymptotic variance matrix we get more robust results, but we need much more computational time to approximate the parameters. We measure (on average) the absolute value of the percentage of the distance between the parameters of simulated series and the estimated ones for the different α ,σ, β,¯ and μ i.e., we compute the average (varying N ¯ ¯ ¯ ) of |∆θGT | = ¯ θGT −θsimulation ¯ ¯ −θsimulation ¯ ¯ θsimulation ¯, and similarly of |∆θMLE | = ¯ θMLE ¯ , where θ = α, θsimulation σ, β, μ. These results are given in Table IV where we remark in bold the best approximations. We observe that the sum of the all absolute errors is higher for the MLE. In addition, the empirical analysis shows that the GT moment estimators present a very good performance even in comparison to those obtained with the MLE method.
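The simulation experiment can be reproduced in outline with standard tools. The sketch below assumes SciPy, whose levy_stable distribution uses the Chambers–Mallows–Stuck generator for simulation and a numerical maximum-likelihood fit; it is a stand-in for the authors' own GT-moment and FFT-based MLE routines, which are not reproduced here. It simulates stable samples of increasing size and reports absolute relative errors as in Table IV.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, beta, sigma, mu = 1.51, 0.5, 1.0, 1.0   # one of the simulated configurations

for N in (200, 500):                            # kept small: the numerical MLE is slow
    # Chambers-Mallows-Stuck simulation of an alpha-stable sample
    x = levy_stable.rvs(alpha, beta, loc=mu, scale=sigma, size=N, random_state=rng)

    # numerical maximum-likelihood fit; returns (alpha, beta, loc, scale)
    a_hat, b_hat, loc_hat, scale_hat = levy_stable.fit(x)

    # absolute relative errors, as reported in Table IV
    err = [abs((a_hat - alpha) / alpha), abs((scale_hat - sigma) / sigma),
           abs((b_hat - beta) / beta), abs((loc_hat - mu) / mu)]
    print(N, np.round(err, 3))
```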

4

Concluding remarks

In the paper we discussed the application of the estimating function method to value the parameters of distributions defined only by their characteristic functions.


Table IV. This table summarizes the average of the absolute errors obtained using either the GT moment estimator or the MLE estimator for different values of α (α = 0.51; 0.76; 1.26; 1.51; 1.76) and β (β = −1; −0.5; 0.5; 1), with σ = 1 and μ = 1.

Moment estimator
              β = −1                    β = −0.5                   β = 0.5                    β = 1
α      Δα    Δσ    Δβ    Δμ      Δα    Δσ    Δβ    Δμ      Δα    Δσ    Δβ    Δμ      Δα    Δσ    Δβ    Δμ
0.51   0.07  0.28  0.10  0.52    0.04  0.09  0.07  0.12    0.03  0.11  0.09  0.10    0.04  0.09  0.01  0.28
0.76   0.04  0.06  0.01  0.58    0.04  0.07  0.08  0.24    0.04  0.08  0.08  0.21    0.04  0.05  0.01  0.52
1.26   0.04  0.04  0.04  0.43    0.03  0.03  0.09  0.19    0.03  0.03  0.08  0.20    0.04  0.03  0.03  0.40
1.51   0.03  0.03  0.05  0.12    0.03  0.02  0.15  0.07    0.03  0.02  0.14  0.06    0.04  0.03  0.05  0.13
1.76   0.03  0.02  0.09  0.09    0.03  0.03  0.36  0.04    0.03  0.02  0.31  0.04    0.05  0.05  0.07  0.14

MLE
              β = −1                    β = −0.5                   β = 0.5                    β = 1
α      Δα    Δσ    Δβ    Δμ      Δα    Δσ    Δβ    Δμ      Δα    Δσ    Δβ    Δμ      Δα    Δσ    Δβ    Δμ
0.51   0.02  0.20  0.62  0.47    0.03  0.25  0.79  0.25    0.03  0.21  0.09  0.05    0.02  0.15  0.02  0.04
0.76   0.23  0.31  0.19  1.54    0.13  0.10  0.50  0.78    0.16  0.13  0.14  0.57    0.23  0.31  0.19  1.54
1.26   0.02  0.02  0.07  0.33    0.02  0.02  0.16  0.24    0.02  0.02  0.08  0.16    0.03  0.02  0.01  0.28
1.51   0.02  0.02  0.06  0.09    0.02  0.02  0.12  0.06    0.02  0.02  0.11  0.05    0.02  0.01  0.00  0.06
1.76   0.02  0.01  0.05  0.04    0.02  0.02  0.18  0.04    0.02  0.02  0.16  0.04    0.02  0.01  0.00  0.04

The proposed methodology showed good versatility, since it can be applied to any bounded function of the underlying random variable. In particular, we proposed two EF estimators for the parameters of stable Paretian distributions and other infinitely divisible distributions. Finally, we presented an empirical comparison based on simulated data from stable Paretian distributions. The good results obtained with the EF moment estimator, even with respect to the MLE method, suggest that further improvements in parameter estimation could probably be made using other bounded functions. Acknowledgement: The authors thank seminar audiences at AMAT 2008 (Memphis, USA) for helpful comments. For their helpful support in writing the tables, we thank Valentina Acerbis, Marco Cassader and Sebastiano Vitali. Rachev’s research was supported by grants from the Division of Mathematical, Life and Physical Science, College of Letters and Science, University of California, Santa Barbara, and the Deutschen Forschungsgemeinschaft. Sergio Ortobelli and Valeria Caviezel were partially supported by grants from ex-murst 60%, 2007 and 2008.

References [1] M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 1970.


[2] P. Carr, H. Geman, D.H. Madan and M. Yor, The Fine Structure of Asset Returns, Journal of Business, 75, 305-332, (2002). [3] J.M. Chambers, C.L. Mallows and B.W. Stuck, A Method for Simulating Stable Random Variables, Journal of the American Statistical Association, 71, 340-344, (1976). [4] V.P. Godambe, Estimating Functions, Oxford University Press, Oxford, 1991. [5] V.P. Godambe and M. Thompson, An Extension of Quasi-likelihood Estimation (with discussion), Journal of Statistical Planning and Inference, 22, 137-172, (1989). [6] Kim Y.S., Rachev S.T., Bianchi M-L., Fabozzi F. Financial Market Models with Levy Processes and Time-Varying Volatility, Journal of Banking and Finance, 32/7, 1363-1378, (2008). [7] D.X. Li and H.J. Turtle, Semi-parametric ARCH Models: An Estimating Function Approach, Journal of Business & Economic Statistics, 18, 174186, (2000). [8] S. Ortobelli and N. Topaloglou, Testing for Preference Orderings Efficiency, Tecnical Report, University of Bergamo, (2008). [9] S. Rachev and S. Mittnik, Stable Paretian Models in Finance, Wiley & Sons, New York, 2000. [10] K. Sato, Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press, Cambridge, 1999. [11] M.C.K. Tweedie, An Index which Distinguishes between some Important Exponential Families, in Statistics: Applications and new Directions, Proc. Indian statistical institute golden Jubilee International conference (I. Ghosh and J. Roy eds), 1984, 579-604.


JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.1, 165-179, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Frequency selective parameterized wavelets of length ten David W. Roach Department of Mathematics and Statistics Murray State University Murray, KY 42071 [email protected] Abstract In this paper, the complete parameterization of the length ten wavelets is given with no parameter constraints. Among this class, examples of “frequency selective” wavelets (subband filters) are highlighted that perform comparable to the FBI 9/7 filter as applied to image compression. Specifically, the parameterization is of the dilation equation coefficients for all trigonometric polynomials m(ω) which satisfy the necessary conditions for orthogonality, that is m(0) = 1 and |m(ω)|2 + |m(ω + π)|2 = 1, but with no restriction on the number of vanishing moments except the zeroth which is part of the necessary conditions. Moreover, specific parameters are given that correspond to the Daubechies wavelets, and a frequency response comparison is given showing that there are “flatter/steeper” frequency responses than the Daubechies wavelets but with fewer vanishing moments. We conclude with an image compression scheme to compare the standard wavelets, the FBI 9/7 biorthogonal wavelets with symmetric boundaries, and some “frequency selective” parameterized wavelets.

Keywords: Wavelets, coefficients, orthogonal, parameterization

1

Introduction

Typically, when someone chooses to use a wavelet in an application, they use the default choices of the standard Daubechies wavelets [3] or the biorthogonal wavelets such as the FBI 9/7 filter [1]. Unfortunately, the wavelet needed for most applications is highly dependent on the data being transformed by the wavelet. In this paper, we give a parameterization for the length ten orthogonal wavelets which allows one to consider a whole continuum of scaling functions that vary from their number of vanishing moments to their regularity. The length of a scaling function refers directly to the number of nonzero dilation equation coefficients, which is one more than the degree of the associated trigonometric polynomial. The length also refers to the nonzero support of the


scaling function. For instance, the length ten scaling functions have ten dilation coefficients, an associated trigonometric polynomial of degree nine, and is supported on the interval [0, 9]. The standard Daubechies’ wavelets were constructed using a spectral factorization method in such a way that they have minimal length, maximal number of vanishing moments, and minimal phase. For example, the Daubechies’ wavelets of lengths two, four, six, eight, and ten with minimal phase each have one, two, three, four, and five vanishing moments respectively. Vanishing moments allow polynomial data within the signal to be well approximated. In [4], Lai and R. constructed explicit parameterizations of all the univariate orthogonal scaling functions of lengths four, six, eight, and ten, but require one to solve a transcendental parameter constraint for both lengths eight and ten. This current paper removes this parameter constraint for the length ten case (refer to [8] for the unconstrained length eight parameterization). Other researchers have investigated the parameterization of orthogonal wavelets (see [14]). It appears that Schneid and Pittner [12] were the first to give formulas that would lead to the explicit parameterizations for the class of finite length orthogonal scaling functions after finding the Kronecker product of some matrices for wavelet lengths of two through ten. Colella and Heil investigated the length four parameterization in [2]. Others have constructed parameterizations for biorthogonal wavelets as well as multiwavelets (see [11] and [9]). Recently, Regensburger, in [7], constructed the explicit parameterizations for the orthogonal scaling functions with multiple vanishing moments up to length ten by first solving the linear system of equations that result from the vanishing moment conditions and then solving the necessary condition for orthogonality. This current paper improves on the parameterizations of the past in that the number of vanishing moments is unrestricted and the parameterizations are explicitly stated with no parameter constraints for the class of length ten wavelets.

2

Necessary Conditions for Orthogonality

The following necessary conditions for orthogonality are well known in the literature (see [3], [5], and others). Consider a scaling function φ that satisfies the dilation equation

φ(x) = Σ_{k=0}^{N} h_k φ(2x − k)

and its associated trigonometric polynomial m of degree N which is given by

m(ω) = Σ_{k=0}^{N} h_k e^{ikω}.

It is well known that m can be written as an infinite product. In order for this product to converge, m must not vanish at the origin, i.e. m(0) = c ≠ 0. This


condition immediately implies, where we choose the normalization c = 1, that

Σ_{k=0}^{N} h_k = 1.   (1)

Moreover, the necessary condition for the orthogonality of φ with its integer shifts is given by

|m(ω)|² + |m(ω + π)|² = 1.   (2)

This condition is equivalent to the dilation coefficients satisfying a system of nonlinear equations, specifically

Σ_{k=0}^{N−2j} h_k h_{k+2j} = (1/2) δ(j),   j = 0, . . . , (N − 1)/2,

where δ(0) = 1 and δ(j) = 0 for j ≠ 0. For the length ten case (N = 9) that we are considering currently, we have the following underdetermined nonlinear system:

h_0² + h_1² + h_2² + h_3² + h_4² + h_5² + h_6² + h_7² + h_8² + h_9² = 1/2
h_0h_2 + h_1h_3 + h_2h_4 + h_3h_5 + h_4h_6 + h_5h_7 + h_6h_8 + h_7h_9 = 0
h_0h_4 + h_1h_5 + h_2h_6 + h_3h_7 + h_4h_8 + h_5h_9 = 0
h_0h_6 + h_1h_7 + h_2h_8 + h_3h_9 = 0
h_0h_8 + h_1h_9 = 0.
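A small Python sketch (not from the paper) of these conditions is given below; it returns the residuals of the normalization Σ h_k = 1 and of the quadratic conditions for an arbitrary coefficient vector, and is illustrated with the minimal-phase Daubechies length-four coefficients under the m(0) = 1 normalization used here.

    import numpy as np

    def orthogonality_residuals(h):
        """Residuals of sum(h) = 1 and sum_k h_k h_{k+2j} = delta(j)/2, j = 0..(N-1)/2."""
        h = np.asarray(h, dtype=float)
        N = len(h) - 1                      # degree of m(omega)
        res = [h.sum() - 1.0]
        for j in range((N - 1)//2 + 1):
            target = 0.5 if j == 0 else 0.0
            res.append(np.dot(h[:N + 1 - 2*j], h[2*j:]) - target)
        return np.array(res)

    # Daubechies length-four coefficients, normalized so that m(0) = sum(h) = 1.
    s = np.sqrt(3.0)
    d4 = np.array([1 + s, 3 + s, 3 - s, 1 - s]) / 8.0
    print(orthogonality_residuals(d4))      # all entries ~ 0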

Additionally, these two conditions (1) and (2) imply the zeroth vanishing moment condition m(π) = 0 or equivalently the linear equations

Σ_{k=0}^{(N−1)/2} h_{2k} = Σ_{k=0}^{(N−1)/2} h_{2k+1} = 1/2.

Because of the strong patterns between the odd and even indices for the dilation coefficients, it is useful to relabel the dilation equation coefficients in the following fashion

m(ω) = Σ_{k=0}^{n} a_k e^{2kiω} + b_k e^{(2k+1)iω}

where we let n = (N − 1)/2. Note that, since there are no odd length scaling functions satisfying the necessary condition for orthogonality, N will always be an odd integer. As a means of summary with our new notation, we conclude with the following statements. Given a scaling function φ and its associated trigonometric polynomial

m(ω) = Σ_{k=0}^{n} a_k e^{2kiω} + b_k e^{(2k+1)iω}


of degree 2n + 1, the necessary condition for orthogonality, |m(ω)|² + |m(ω + π)|² = 1, is equivalent to the following system of nonlinear equations:

Σ_{k=0}^{n−j} ( a_k a_{k+j} + b_k b_{k+j} ) = (1/2) δ(j),   j = 0, . . . , n,

where δ(0) = 1 and δ(j) = 0 for j ≠ 0.

3

Length Four

Although the length four parameterization is well known (see [8, 14]), it is used in the construction of the length ten parameterization and is presented here for completeness. For length four (N = 3 and n = 1), the nonlinear system of equations is

a_0 + a_1 = 1/2   (3)
b_0 + b_1 = 1/2   (4)
a_0² + a_1² + b_0² + b_1² = 1/2   (5)
a_0a_1 + b_0b_1 = 0.   (6)

Subtracting twice equation (6) from equation (5) gives

(a_0 − a_1)² + (b_0 − b_1)² = 1/2.

This equation allows the introduction of a free parameter, that is

a_0 − a_1 = (1/√2) sin θ   (7)
b_0 − b_1 = (1/√2) cos θ.   (8)

Combining equations (3) and (4) with (7) and (8) gives the length four parameterization

a_0 = 1/4 + (1/(2√2)) sin θ,   b_0 = 1/4 + (1/(2√2)) cos θ,
a_1 = 1/4 − (1/(2√2)) sin θ,   b_1 = 1/4 − (1/(2√2)) cos θ.

These formulas are well known (see [14],[2],[8], and others). To aid in the construction of the longer parameterizations, a different period and phase shift are


chosen for the length four solution, that is θ = 2α − π/4. With this substitution and some simplification, the length four solution can be written as

a_0 = (1/4)(1 − cos 2α + sin 2α)
b_0 = (1/4)(1 + cos 2α + sin 2α)
a_1 = (1/4)(1 + cos 2α − sin 2α)
b_1 = (1/4)(1 − cos 2α − sin 2α)

where this form will simplify future computations. It should be noted that this parameterization is a necessary representation for the coefficients and upon substituting them back into the system of equations (3)-(6), we see that they are also sufficient.

4

Length Ten

For the construction of parameterizations for length six and eight see [8]. For length ten (N = 9 and n = 4), the nonlinear system of equations is given by

a_0 + a_1 + a_2 + a_3 + a_4 = 1/2   (9)
b_0 + b_1 + b_2 + b_3 + b_4 = 1/2   (10)
a_0² + a_1² + a_2² + a_3² + a_4² + b_0² + b_1² + b_2² + b_3² + b_4² = 1/2   (11)
a_0a_1 + a_1a_2 + a_2a_3 + a_3a_4 + b_0b_1 + b_1b_2 + b_2b_3 + b_3b_4 = 0   (12)
a_0a_2 + a_1a_3 + a_2a_4 + b_0b_2 + b_1b_3 + b_2b_4 = 0   (13)
a_0a_3 + a_1a_4 + b_0b_3 + b_1b_4 = 0   (14)
a_0a_4 + b_0b_4 = 0.   (15)

An important step in the construction is establishing the connection between the sums of the even and odd indexed coefficients back to the length four parameterization. More specifically, the sums a_0 + a_2 + a_4, a_1 + a_3, b_0 + b_2 + b_4, and b_1 + b_3 satisfy the system of equations associated with the length four parameterization, i.e.

(a_0 + a_2 + a_4) + (a_1 + a_3) = 1/2
(b_0 + b_2 + b_4) + (b_1 + b_3) = 1/2
(a_0 + a_2 + a_4)² + (a_1 + a_3)² + (b_0 + b_2 + b_4)² + (b_1 + b_3)² = 1/2
(a_0 + a_2 + a_4)(a_1 + a_3) + (b_0 + b_2 + b_4)(b_1 + b_3) = 0.


The third equation is equivalent to the sum of equations (11) and (13), and the last one is equivalent to the sum of equations (12) and (14). Therefore, we can use the length four parameterization for these sums, i.e.

a_0 + a_2 + a_4 = (1/4)(1 − cos 2α + sin 2α)
b_0 + b_2 + b_4 = (1/4)(1 + cos 2α + sin 2α)
a_1 + a_3 = (1/4)(1 + cos 2α − sin 2α)
b_1 + b_3 = (1/4)(1 − cos 2α − sin 2α).

In an effort to linearize the system of equations, note that the sum and difference of equation (11) and twice equation (15) give the two equations:

(a_0 + a_4)² + (b_0 + b_4)² = 1/2 − a_1² − a_2² − a_3² − b_1² − b_2² − b_3²   (16)
(a_0 − a_4)² + (b_0 − b_4)² = 1/2 − a_1² − a_2² − a_3² − b_1² − b_2² − b_3² := p².   (17)

Although the right hand side, p², has not yet been determined, we use the fact that the right-hand sides of equations (16) and (17) are equivalent and introduce two new free parameters β and γ in the following fashion:

a_0 + a_4 = p cos β,   b_0 + b_4 = p sin β,
a_0 − a_4 = p cos γ,   b_0 − b_4 = p sin γ.

There are now eight linear equations and ten unknowns. For the last two equations, we introduce two additional free parameters q and r for the differences between the odd indices and rotate them using an orthogonal matrix which depends on the parameter γ, i.e.,

a_1 − a_3 = q cos γ − r sin γ
b_1 − b_3 = q sin γ + r cos γ.

The decision to rotate these free parameters by the orthogonal matrix depending on γ was based on the nonlinear relationship in equation (14) which involves cos γ and sin γ. Now, solving the linear system of equations involving the free parameters α, β, γ, p, q, and r which necessarily satisfy the nonlinear system of equations yields


a_0 = (p/2)(cos β + cos γ)
b_0 = (p/2)(sin β + sin γ)
a_1 = (1/8)(1 + cos 2α − sin 2α) + (1/2)(q cos γ − r sin γ)
b_1 = (1/8)(1 − cos 2α − sin 2α) + (1/2)(q sin γ + r cos γ)
a_2 = (1/4)(1 − cos 2α + sin 2α) − p cos β
b_2 = (1/4)(1 + cos 2α + sin 2α) − p sin β
a_3 = (1/8)(1 + cos 2α − sin 2α) − (1/2)(q cos γ − r sin γ)
b_3 = (1/8)(1 − cos 2α − sin 2α) − (1/2)(q sin γ + r cos γ)
a_4 = (p/2)(cos β − cos γ)
b_4 = (p/2)(sin β − sin γ).

As far as the nonlinear equations, notice that equation (15) is satisfied immediately. Now plugging the linear solutions into equation (14), we get

a_0a_3 + a_1a_4 + b_0b_3 + b_1b_4 = 0
(p/8)( −4q + cos β + sin β + cos(2α + β) − sin(2α + β) ) = 0

and since p = 0 would result in a constrained solution, we have the parameterization of q as

q = (1/4)( cos β + sin β + cos(2α + β) − sin(2α + β) )
  = (1/2)(cos α − sin α) cos(α + β).

Using this value of q and substituting the linear equation solutions into equation (12), we see that equation (12) is now satisfied while equation (13) produces a quadratic equation in terms of the unknown p, that is

a_0a_2 + a_1a_3 + a_2a_4 + b_0b_2 + b_1b_3 + b_2b_4 = 0
p² − (1/2)(cos α + sin α) sin(α + β) p + (1/64)( 16r² + 2 cos(2(α + β)) − sin(2(2α + β)) + 2 sin 2α + sin 2β − 2 ) = 0

which has a solution in terms of the free parameter r of

p = (1/4)[ (cos α + sin α) sin(α + β) ± √( 1 − 4r² − cos(2(α + β)) ) ]


with a constraint on the size of the parameter r,

r² ≤ (1/4)(1 − cos(2(α + β)))   (18)
   = (1/2) sin²(α + β),   (19)

that is necessary to keep the parameter p real. So, we introduce the free parameter δ for r that satisfies the inequality (18), i.e.

r = (1/√2) cos δ sin(α + β).

After substituting this parameterization of r back into the expression for p, we have

p = (1/4) sin(α + β)( cos α + sin α − √2 sin δ ).

Substituting these parameterizations of p, q, and r back into the nonlinear system shows that in fact these necessary conditions on the parameters are also sufficient. Therefore the complete parameterization of all coefficient vectors of length ten that satisfy the necessary conditions for orthogonality is given by the following:

p = (1/4) sin(α + β)( cos α + sin α − √2 sin δ )
q = (1/2)(cos α − sin α) cos(α + β)
r = (1/√2) cos δ sin(α + β)
a_0 = (p/2)(cos β + cos γ)
b_0 = (p/2)(sin β + sin γ)
a_1 = (1/8)(1 + cos 2α − sin 2α) + (1/2)(q cos γ − r sin γ)
b_1 = (1/8)(1 − cos 2α − sin 2α) + (1/2)(q sin γ + r cos γ)
a_2 = (1/4)(1 − cos 2α + sin 2α) − p cos β
b_2 = (1/4)(1 + cos 2α + sin 2α) − p sin β
a_3 = (1/8)(1 + cos 2α − sin 2α) − (1/2)(q cos γ − r sin γ)
b_3 = (1/8)(1 − cos 2α − sin 2α) − (1/2)(q sin γ + r cos γ)
a_4 = (p/2)(cos β − cos γ)
b_4 = (p/2)(sin β − sin γ)

9 α -2.35619449 -2.61799388 -2.85603576 -3.08353313 -3.30496928 -3.27396878 0.71389116 0.74143253

β -2.35619449 -2.09439510 -1.96184952 -1.88256239 -1.80914574 -1.82536723 -2.86392827 -2.94548381

γ -2.35619449 -2.09439510 -1.96184952 -1.88256239 -1.85080961 -1.87718786 1.66872168 1.51834616

δ 1.57079633 1.30899694 1.07095506 0.93539307 0.88683222 0.93186197 2.43807534 2.20225600

Table 1: Parameters associated with the standard Daubechies wavelets and some other length ten wavelets.

which is well-defined for any parameters α, β, γ, and δ ∈ IR. This parameterization contains all of the standard Daubechies wavelets of length ten or less where the parameters needed for each are given in Table 1 along with some other wavelets which will be used in the image compression comparison. Figure 1 shows the scaling functions associated with the parameters chosen in Table 1.

5

Frequency Selective Wavelets and Image Compression

In Daubechies seminal work [3], an emphasis is placed on discrete orthogonal wavelets which have a maximum number of vanishing moments. In that work, an example is put forth where one of the vanishing moments is moved away from ω = −π towards −π/2 and the comment is made that this wavelet is much closer to a “realistic” subband coding filter (see pg. 201 in [3]). In a later work, Ojanen in [6] develops a scheme to find the maximal regularity for the scaling functions based on moving a finite number vanishing moments towards the center. Ojanen lists one optimal length 10 wavelet which we will call OJ10(see Table 1 for the necessary parameters). The frequency response graphs in Figure 2 and 3 illustrate how the filters will treat various frequencies in the range [−π, π]. An ”ideal” filter (which can not be implemented with a finite convolution) has a frequency response that is one on the interval [−π/2, π/2] and zero otherwise. In Figure 3b, notice that OJ10 uses a zero near -2.6 to improve the “flatness” in this area and subsequently giving it a steeper transition. In the literature, band-pass filters that model the “ideal” filter are called ”frequency selective”. T10 and MF10 both have only the zeroth vanishing moment, but MF10 keeps a shallow profile in the first part of the interval and achieves a steeper transition than D10 or OJ10. T10 is highlighted here because of its extreme steepness at the transition compared to the others, but sacrifices flatness in the first part of the interval.

173

174

10

(a)

(c)

(e)

(g)

Frequency selective parameterized wavelets of length ten

1

1

0.5

0.5

0

0

−0.5

0

1

2

3

4

5

6

7

8

9

(b)

−0.5

0

1

2

3

4

5

6

7

8

9

0

1

2

3

4

5

6

7

8

9

0

1

2

3

4

5

6

7

8

9

1

2

3

4

5

6

7

8

9

1

1

0.5

0.5

0

0

−0.5

0

1

2

3

4

5

6

7

8

9

(d)

−0.5

1

1

0.5

0.5

0

0

−0.5

0

1

2

3

4

5

6

7

8

9

(f)

−0.5

1

1

0.5

0.5

0

0

−0.5

0

1

2

3

4

5

6

7

8

9

(h)

−0.5

0

Figure 1: Scaling function plots for the specific parameters in Table 1. (a) Haar, (b) D4, (c) D6, (d) D8, (e) D10, (f) OJ10, (g) T10, and (h) MF10.


−2.2

−2

Figure 2: Frequency response plots |m(ω)| for the scaling functions from Table 1. (a) Haar, D4, D6, D8, and, D10 (b) zoomed in plot of previous, (c) D10, T10, OJ10, and MF10, (d) zoomed in plot of previous.


Row = 469
Wavelet    Barb   Boat   Lena   Marm   Bark   Fing   Sand
D10        3034   1052    820     73   2973   1676   1943
OJ10       3039   1050    813     73   2946   1675   1954
T10        2822    957    880     95   2610   1670   1993
MF10       2793    960    846     82   2640   1669   1959
FBI 9/7    2900    976    798     58   2795   1668   1825

Row = 191
Wavelet    Barb   Boat   Lena   Marm   Bark   Fing   Sand
D10        2064   1651    951    113   2384   1544   2317
OJ10       2029   1665    943    105   2358   1532   2311
T10        2409   1603    986    256   2210   1529   2255
MF10       2379   1603    948    163   2188   1532   2241
FBI 9/7    2166   1593    880    106   2198   1548   2090

Row = 287
Wavelet    Barb   Boat   Lena   Marm   Bark   Fing   Sand
D10        2818   1799    929     76   2407   1629   1972
OJ10       2799   1785    917     70   2398   1616   1960
T10        3163   1829    946    184   2272   1555   2126
MF10       3172   1826    941    116   2259   1568   2099
FBI 9/7    2856   1756    854     90   2182   1634   1911

Table 2: The ℓ1 norm of the wavelet coefficients for a one-level decomposition of a single row from some test images. The lowest ℓ1 norm has been boldfaced for convenience.

In our first numerical experiment, we compare the four length-ten filters D10, OJ10, T10, and MF10 with the biorthogonal FBI 9/7 wavelet with a single level decomposition of a row from our test images and compute the ℓ1 norm of the wavelet coefficients. A lower ℓ1 norm would suggest a more efficient representation for the vector. We chose three random rows and tested all five wavelets on seven different images using the specified row. As can be seen in Table 2, the 9/7 filter appears to do better more than half of the time. The lowest ℓ1 norm has been boldfaced for each image. The results are not conclusive, but are still interesting. In a second more comprehensive numerical experiment, we present an image compression comparison for the wavelets D10, OJ10, T10, MF10, and the FBI biorthogonal 9/7 wavelet with symmetric boundary extensions. Because of its common use as an industry standard and similar length, we chose to include the biorthogonal 9/7 wavelet in the comparison. The 9/7 biorthogonal wavelet has one advantage over the the length ten parameterization in that it is symmetric. This symmetry can be used to improve the performance of image compression at the image boundaries. We have included this advantage in our numerical results. The details of the numerical experiment are as follows:


450

500

Figure 3: The seven test images used in the compression scheme (512 × 512 grayscale images): (a) Barb, (b) Boat, (c) Lena, (d) Marm, (e) Bark, (f) Fing, and (g) Sand.


Wavelet    Barb    Boat    Lena    Marm    Bark    Fing    Sand
D10        26.29   28.74   32.30   34.78   21.27   30.14   23.27
OJ10       26.34   28.76   32.32   34.93   21.31   30.18   23.28
T10        26.43   28.59   32.11   35.01   21.41   30.35   23.42
MF10       26.43   28.69   32.43   35.57   21.44   30.28   23.44
FBI 9/7    26.30   29.22   32.81   35.11   21.43   30.14   23.46

Table 3: PSNR results for the seven images Barb, Boat, Lena, Marm, Bark, Fing, and Sand using the wavelets D10, OJ10, T10, MF10, and the FBI biorthogonal 9/7. The best PSNR for each image is boldfaced.

• Seven level decomposition with periodic boundaries (except for 9/7 which has symmetric extensions) using D10, OJ10, T10, MF10, and 9/7 FBI. Note: all of the scaling functions presented in this paper need to be normalized by a multiple of √2 during implementation.

• Embedded Zero-tree (EZW) compression (see [13] and [10]) with a file size ratio of 32:1. For this experiment, all of the images are 512 × 512 with a PGM file-size of 256Kb and a compressed file-size of 8Kb. This particular EZW implementation is not completely optimized and would not necessarily yield the maximum PSNR possible but serves well as a comparative measure of the true compressibility of the wavelet decomposition.

• Seven level reconstruction followed by a Peak Signal to Noise Ratio (PSNR) computation, i.e.

RMSE = sqrt( (1/512²) Σ_{i=1}^{512} Σ_{j=1}^{512} |A_{i,j} − Ã_{i,j}|² ),
PSNR = 20 log10( 255 / RMSE ),

where A_{i,j} is the original matrix of grayscale values and Ã_{i,j} is the compressed version. The results from the experiment are given in Table 3.
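The error measure in the last item can be computed as in the Python sketch below; the sketch is only illustrative, and the two random arrays stand in for an original test image and its decompressed version.

    import numpy as np

    def psnr(original, compressed, peak=255.0):
        """PSNR in dB from the RMSE of two equally sized grayscale images."""
        a = np.asarray(original, dtype=float)
        b = np.asarray(compressed, dtype=float)
        rmse = np.sqrt(np.mean((a - b)**2))
        return 20*np.log10(peak/rmse)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(512, 512)).astype(float)
    noisy = img + rng.normal(0, 5, size=img.shape)   # stand-in for a decompressed image
    print(psnr(img, noisy))                          # roughly 34 dB for noise std 5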

References [1] Brislawn, C., J. Bradley, R. Onyshczak, and T. Hopper, The FBI compression standard for digitized fingerprint images, Appl. Digital Image Process., Proc. SPIE 2847 (1996), 344-355. [2] D. Colella and C. Heil, The characterization of continuous, four-coefficient scaling functions and wavelets, IEEE Trans. Inf. Th., Special Issue on


Wavelet Transforms and Multiresolution Signal Analysis, 38 (1992), pp. 876-881. [3] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992. [4] M. J. Lai and D. W. Roach, Parameterizations of univariate orthogonal wavelets with short support, Approximation Theory X: Splines, Wavelets, and Applications, C. K. Chui, L. L. Schumaker, and J. St¨ockler (eds.), Vanderbilt University Press, Nashville, 2002, 369–384. [5] W. Lawton, Necessary and sufficient conditions for constructing orthonormal wavelet bases, J. Math. Phys. 32 (1991), 57–61. [6] H. Ojanen, Orthonormal compactly supported wavelets with optimal sobolev regularity, Applied and Computational Harmonic Analysis 10, pp. 93-98 (2001). [7] G. Regensburger, Parametrizing compactly supported orthonormal wavelets by discrete moments. Appl. Algebra Eng., Commun. Comput. 18, 6 (Nov. 2007), 583-601. [8] D. W. Roach, The Parameterization of the Length Eight Orthogonal Wavelets with No Parameter Constraints, Approximation Theory XII: San Antonio 2007, M. Neamtu and L. Schumaker (eds.), Nashboro Press, pp. 332-347, 2008. [9] H. L. Resnikoff, J. Tian, R. O. Wells, Jr., Biorthogonal wavelet space: parametrization and factorization, SIAM J. Math. Anal. 33 (2001), no. 1, 194–215. [10] A. Said and W. A. Pearlman, A new fast and efficient image codec based on set partitioning in hierarchical trees, IEEE Transactions on Circuits and Systems for Video Technology 6 (1996), 243–250. [11] J. Qingtang, Paramterization of m-channel orthogonal multifilter banks, Advances in Computational Mathematics 12 (2000), 189–211. [12] J. Schneid and S. Pittner, On the parametrization of the coefficients of dilation equations for compactly supported wavelets, Computing 51 (1993), 165–173. [13] J. M. Shapiro, Embedded image coding using zerotrees of wavelet coefficients, IEEE Transactions Signal Processing 41 (1993), 3445–3462. [14] R. O. Wells, Jr., Parameterizing smooth compactly supported wavelets, Trans. Amer. Math. Soc. 338 (1993), 919–931. [15] M. V. Wickerhauser, Comparison of picture compression methods: wavelet, wavelet packet, and local cosine transform coding, Wavelets: theory, algorithms, and applications (Taormina, 1993), Academic Press, San Diego, CA, 1994, 585–621.

179

180

Instructions to Contributors Journal of Concrete and Applicable Mathematics A quartely international publication of Eudoxus Press, LLC, of TN.

Editor in Chief: George Anastassiou Department of Mathematical Sciences University of Memphis Memphis, TN 38152-3240, U.S.A.

1. Manuscripts hard copies in triplicate, and in English, should be submitted to the Editor-in-Chief: Prof.George A. Anastassiou Department of Mathematical Sciences The University of Memphis Memphis,TN 38152, USA. Tel. 901.678.3144 e-mail: [email protected] Authors may want to recommend an associate editor the most related to the submission to possibly handle it. Also authors may want to submit a list of six possible referees, to be used in case we cannot find related referees by ourselves. 2. Manuscripts should be typed using any of TEX,LaTEX,AMS-TEX,or AMS-LaTEX and according to EUDOXUS PRESS, LLC. LATEX STYLE FILE. (Click HERE to save a copy of the style file.)They should be carefully prepared in all respects. Submitted copies should be brightly printed (not dot-matrix), double spaced, in ten point type size, on one side high quality paper 8(1/2)x11 inch. Manuscripts should have generous margins on all sides and should not exceed 24 pages. 3. Submission is a representation that the manuscript has not been published previously in this or any other similar form and is not currently under consideration for publication elsewhere. A statement transferring from the authors(or their employers,if they hold the copyright) to Eudoxus Press, LLC, will be required before the manuscript can be accepted for publication.The Editor-in-Chief will supply the necessary forms for this transfer.Such a written transfer of copyright,which previously was assumed to be implicit in the act of submitting a manuscript,is necessary under the U.S.Copyright Law in order for the publisher to carry through the dissemination of research results and reviews as widely and effective as possible.

181

4. The paper starts with the title of the article, author's name(s) (no titles or degrees), author's affiliation(s) and e-mail addresses. The affiliation should comprise the department, institution (usually university or company), city, state (and/or nation) and mail code. The following items, 5 and 6, should be on page no. 1 of the paper. 5. An abstract is to be provided, preferably no longer than 150 words. 6. A list of 5 key words is to be provided directly below the abstract. Key words should express the precise content of the manuscript, as they are used for indexing purposes. The main body of the paper should begin on page no. 1, if possible. 7. All sections should be numbered with Arabic numerals (such as: 1. INTRODUCTION) . Subsections should be identified with section and subsection numbers (such as 6.1. Second-Value Subheading). If applicable, an independent single-number system (one for each category) should be used to label all theorems, lemmas, propositions, corrolaries, definitions, remarks, examples, etc. The label (such as Lemma 7) should be typed with paragraph indentation, followed by a period and the lemma itself. 8. Mathematical notation must be typeset. Equations should be numbered consecutively with Arabic numerals in parentheses placed flush right,and should be thusly referred to in the text [such as Eqs.(2) and (5)]. The running title must be placed at the top of even numbered pages and the first author's name, et al., must be placed at the top of the odd numbed pages. 9. Illustrations (photographs, drawings, diagrams, and charts) are to be numbered in one consecutive series of Arabic numerals. The captions for illustrations should be typed double space. All illustrations, charts, tables, etc., must be embedded in the body of the manuscript in proper, final, print position. In particular, manuscript, source, and PDF file version must be at camera ready stage for publication or they cannot be considered. Tables are to be numbered (with Roman numerals) and referred to by number in the text. Center the title above the table, and type explanatory footnotes (indicated by superscript lowercase letters) below the table. 10. List references alphabetically at the end of the paper and number them consecutively. Each must be cited in the text by the appropriate Arabic numeral in square brackets on the baseline. References should include (in the following order): initials of first and middle name, last name of author(s) title of article,

182

name of publication, volume number, inclusive pages, and year of publication. Authors should follow these examples: Journal Article 1. H.H.Gonska,Degree of simultaneous approximation of bivariate functions by Gordon operators, (journal name in italics) J. Approx. Theory, 62,170-191(1990).

Book 2. G.G.Lorentz, (title of book in italics) Bernstein Polynomials (2nd ed.), Chelsea,New York,1986.

Contribution to a Book 3. M.K.Khan, Approximation properties of beta operators,in(title of book in italics) Progress in Approximation Theory (P.Nevai and A.Pinkus,eds.), Academic Press, New York,1991,pp.483-495.

11. All acknowledgements (including those for a grant and financial support) should occur in one paragraph that directly precedes the References section. 12. Footnotes should be avoided. When their use is absolutely necessary, footnotes should be numbered consecutively using Arabic numerals and should be typed at the bottom of the page to which they refer. Place a line above the footnote, so that it is set off from the text. Use the appropriate superscript numeral for citation in the text. 13. After each revision is made please again submit three hard copies of the revised manuscript, including in the final one. And after a manuscript has been accepted for publication and with all revisions incorporated, manuscripts, including the TEX/LaTex source file and the PDF file, are to be submitted to the Editor's Office on a personal-computer disk, 3.5 inch size. Label the disk with clearly written identifying information and properly ship, such as: Your name, title of article, kind of computer used, kind of software and version number, disk format and files names of article, as well as abbreviated journal name. Package the disk in a disk mailer or protective cardboard. Make sure contents of disks are identical with the ones of final hard copies submitted! Note: The Editor's Office cannot accept the disk without the accompanying matching hard copies of manuscript. No e-mail final submissions are allowed! The disk submission must be used.

14. Effective 1 Nov. 2009 for current journal page charges, contact the Editor in Chief. Upon acceptance of the paper an invoice will be sent to the contact author. The fee payment will be due one month from the invoice date. The article will proceed to publication only after the fee is paid. The charges are to be sent, by money order or certified check, in US dollars, payable to Eudoxus Press, LLC, to the address shown on

183

the Eudoxus homepage. No galleys will be sent and the contact author will receive one(1) electronic copy of the journal issue in which the article appears. 15. This journal will consider for publication only papers that contain proofs for their listed results.

 

184

TABLE OF CONTENTS, JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL. 8, NO. 1, 2010

Iterative reconstruction and stability bounds for sampling models, Ernesto AcostaReyes,………………………………………………………………………………….9 Global existence and blow up for solutions to higher order Boussinesq systems, De Godefroy Akmel,……………………………………………………………………..24 Caputo Fractional Multivariate Opial type inequalities on spherical shells, George A. Anastassiou,…………………………………………………………………………..41 Asymptotics for Szego polynomials with respect to a class of weakly convergent measures, Michael Arciero, Lewis Pakula,……………………………………………………..92 A computational approach to the determination of nets, Hans Fetter, Juan H. Arredondo R.,……………………………………………………………………………………..98 Kernel based Wavelets on S3, S. Bernstein, S. Ebert, …………………………….110 A Taste of Ideal Projectors, Boris Shekhtman,…………………………………….125 Semiparametric estimators for heavy tailed distributions, Valeria Caviezel et al,150 Frequency selective parameterized wavelets of length ten, David W. Roach,……165

185

VOLUME 8, NUMBER 2

APRIL 2010

ISSN:1548-5390 PRINT,1559-176X ONLINE

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS

SPECIAL ISSUE II :APPLIED MATHEMATICS AND APPROXIMATION THEORY EUDOXUS PRESS,LLC

186

SCOPE AND PRICES OF THE JOURNAL Journal of Concrete and Applicable Mathematics A quartely international publication of Eudoxus Press,LLC Editor in Chief: George Anastassiou Department of Mathematical Sciences, University of Memphis Memphis, TN 38152, U.S.A. [email protected] The main purpose of the "Journal of Concrete and Applicable Mathematics" is to publish high quality original research articles from all subareas of Non-Pure and/or Applicable Mathematics and its many real life applications, as well connections to other areas of Mathematical Sciences, as long as they are presented in a Concrete way. It welcomes also related research survey articles and book reviews.A sample list of connected mathematical areas with this publication includes and is not restricted to: Applied Analysis, Applied Functional Analysis, Probability theory, Stochastic Processes, Approximation Theory, O.D.E, P.D.E, Wavelet, Neural Networks,Difference Equations, Summability, Fractals, Special Functions, Splines, Asymptotic Analysis, Fractional Analysis, Inequalities, Moment Theory, Numerical Functional Analysis,Tomography, Asymptotic Expansions, Fourier Analysis, Applied Harmonic Analysis, Integral Equations, Signal Analysis, Numerical Analysis, Optimization, Operations Research, Linear Programming, Fuzzyness, Mathematical Finance, Stochastic Analysis, Game Theory, Math.Physics aspects, Applied Real and Complex Analysis, Computational Number Theory, Graph Theory, Combinatorics, Computer Science Math.related topics,combinations of the above, etc. In general any kind of Concretely presented Mathematics which is Applicable fits to the scope of this journal. Working Concretely and in Applicable Mathematics has become a main trend in many recent years,so we can understand better and deeper and solve the important problems of our real and scientific world. "Journal of Concrete and Applicable Mathematics" is a peer- reviewed International Quarterly Journal. We are calling for papers for possible publication. The contributor should send three copies of the contribution to the editor in-Chief typed in TEX, LATEX double spaced. [ See: Instructions to Contributors]

Journal of Concrete and Applicable Mathematics(JCAAM) ISSN:1548-5390 PRINT, 1559-176X ONLINE. is published in January,April,July and October of each year by EUDOXUS PRESS,LLC, 1424 Beaver Trail Drive,Cordova,TN38016,USA, Tel.001-901-751-3553 [email protected] http://www.EudoxusPress.com. Visit also www.msci.memphis.edu/~ganastss/jcaam. Webmaster:Ray Clapsadle Annual Subscription Current Prices:For USA and Canada,Institutional:Print $400,Electronic $250,Print and Electronic $450.Individual:Print $150, Electronic

187

$80,Print &Electronic $200.For any other part of the world add $50 more to the above prices for Print. Single article PDF file for individual $15.Single issue in PDF form for individual $60. No credit card payments.Only certified check,money order or international check in US dollars are acceptable. Combination orders of any two from JoCAAA,JCAAM,JAFA receive 25% discount,all three receive 30% discount. Copyright©2010 by Eudoxus Press,LLC all rights reserved.JCAAM is printed in USA. JCAAM is reviewed and abstracted by AMS Mathematical Reviews,MATHSCI,and Zentralblaat MATH. It is strictly prohibited the reproduction and transmission of any part of JCAAM and in any form and by any means without the written permission of the publisher.It is only allowed to educators to Xerox articles for educational purposes.The publisher assumes no responsibility for the content of published papers. JCAAM IS A JOURNAL OF RAPID PUBLICATION

188

Editorial Board Associate Editors

Editor in -Chief: George Anastassiou Department of Mathematical Sciences The University Of Memphis Memphis,TN 38152,USA tel.901-678-3144,fax 901-678-2480 e-mail [email protected] www.msci.memphis.edu/~anastasg/anlyjour.htm Areas:Approximation Theory, Probability,Moments,Wavelet, Neural Networks,Inequalities,Fuzzyness. Associate Editors: 1) Ravi Agarwal Florida Institute of Technology Applied Mathematics Program 150 W.University Blvd. Melbourne,FL 32901,USA [email protected] Differential Equations,Difference Equations, Inequalities 2) Drumi D.Bainov Medical University of Sofia P.O.Box 45,1504 Sofia,Bulgaria [email protected] Differential Equations,Optimal Control, Numerical Analysis,Approximation Theory 3) Carlo Bardaro Dipartimento di Matematica & Informatica Universita' di Perugia Via Vanvitelli 1 06123 Perugia,ITALY tel.+390755855034, +390755853822, fax +390755855024 [email protected] , [email protected] Functional Analysis and Approximation Th., Summability,Signal Analysis,Integral Equations, Measure Th.,Real Analysis 4) Francoise Bastin Institute of Mathematics University of Liege 4000 Liege

21) Gustavo Alberto Perla Menzala National Laboratory of Scientific Computation LNCC/MCT Av. Getulio Vargas 333 25651-075 Petropolis, RJ Caixa Postal 95113, Brasil and Federal University of Rio de Janeiro Institute of Mathematics RJ, P.O. Box 68530 Rio de Janeiro, Brasil [email protected] and [email protected] Phone 55-24-22336068, 55-21-25627513 Ext 224 FAX 55-24-22315595 Hyperbolic and Parabolic Partial Differential Equations, Exact controllability, Nonlinear Lattices and Global Attractors, Smart Materials 22) Ram N.Mohapatra Department of Mathematics University of Central Florida Orlando,FL 32816-1364 tel.407-823-5080 [email protected] Real and Complex analysis,Approximation Th., Fourier Analysis, Fuzzy Sets and Systems 23) Rainer Nagel Arbeitsbereich Funktionalanalysis Mathematisches Institut Auf der Morgenstelle 10 D-72076 Tuebingen Germany tel.49-7071-2973242 fax 49-7071-294322 [email protected] evolution equations,semigroups,spectral th., positivity 24) Panos M.Pardalos Center for Appl. Optimization University of Florida 303 Weil Hall P.O.Box 116595 Gainesville,FL 32611-6595 tel.352-392-9011 [email protected] Optimization,Operations Research

189

BELGIUM [email protected] Functional Analysis,Wavelets 5) Yeol Je Cho Department of Mathematics Education College of Education Gyeongsang National University Chinju 660-701 KOREA tel.055-751-5673 Office, 055-755-3644 home, fax 055-751-6117 [email protected] Nonlinear operator Th.,Inequalities, Geometry of Banach Spaces 6) Sever S.Dragomir School of Communications and Informatics Victoria University of Technology PO Box 14428 Melbourne City M.C Victoria 8001,Australia tel 61 3 9688 4437,fax 61 3 9688 4050 [email protected], [email protected] Math.Analysis,Inequalities,Approximation Th., Numerical Analysis, Geometry of Banach Spaces, Information Th. and Coding 7) Angelo Favini Università di Bologna Dipartimento di Matematica Piazza di Porta San Donato 5 40126 Bologna, ITALY tel.++39 051 2094451 fax.++39 051 2094490 [email protected] Partial Differential Equations, Control Theory, Differential Equations in Banach Spaces 8) Claudio A. Fernandez Facultad de Matematicas Pontificia Unversidad Católica de Chile Vicuna Mackenna 4860 Santiago, Chile tel.++56 2 354 5922 fax.++56 2 552 5916 [email protected] Partial Differential Equations, Mathematical Physics, Scattering and Spectral Theory

25) Svetlozar T.Rachev Dept.of Statistics and Applied Probability Program University of California,Santa Barbara CA 93106-3110,USA tel.805-893-4869 [email protected] AND Chair of Econometrics and Statistics School of Economics and Business Engineering University of Karlsruhe Kollegium am Schloss,Bau II,20.12,R210 Postfach 6980,D-76128,Karlsruhe,Germany tel.011-49-721-608-7535 [email protected] Mathematical and Empirical Finance, Applied Probability, Statistics and Econometrics 26) John Michael Rassias University of Athens Pedagogical Department Section of Mathematics and Infomatics 20, Hippocratous Str., Athens, 106 80, Greece Address for Correspondence 4, Agamemnonos Str. Aghia Paraskevi, Athens, Attikis 15342 Greece [email protected] [email protected] Approximation Theory,Functional Equations, Inequalities, PDE 27) Paolo Emilio Ricci Universita' degli Studi di Roma "La Sapienza" Dipartimento di Matematica-Istituto "G.Castelnuovo" P.le A.Moro,2-00185 Roma,ITALY tel.++39 0649913201,fax ++39 0644701007 [email protected],[email protected] Orthogonal Polynomials and Special functions, Numerical Analysis, Transforms,Operational Calculus, Differential and Difference equations 28) Cecil C.Rousseau Department of Mathematical Sciences The University of Memphis Memphis,TN 38152,USA tel.901-678-2490,fax 901-678-2480 [email protected] Combinatorics,Graph Th., Asymptotic Approximations, Applications to Physics 29) Tomasz Rychlik

190

9) A.M.Fink Department of Mathematics Iowa State University Ames,IA 50011-0001,USA tel.515-294-8150 [email protected] Inequalities,Ordinary Differential Equations 10) Sorin Gal Department of Mathematics University of Oradea Str.Armatei Romane 5 3700 Oradea,Romania [email protected] Approximation Th.,Fuzzyness,Complex Analysis 11) Jerome A.Goldstein Department of Mathematical Sciences The University of Memphis, Memphis,TN 38152,USA tel.901-678-2484 [email protected] Partial Differential Equations, Semigroups of Operators 12) Heiner H.Gonska Department of Mathematics University of Duisburg Duisburg,D-47048 Germany tel.0049-203-379-3542 office [email protected] Approximation Th.,Computer Aided Geometric Design 13) Dmitry Khavinson Department of Mathematical Sciences University of Arkansas Fayetteville,AR 72701,USA tel.(479)575-6331,fax(479)575-8630 [email protected] Potential Th.,Complex Analysis,Holomorphic PDE, Approximation Th.,Function Th. 14) Virginia S.Kiryakova Institute of Mathematics and Informatics Bulgarian Academy of Sciences Sofia 1090,Bulgaria [email protected] Special Functions,Integral Transforms, Fractional Calculus 15) Hans-Bernd Knoop

Institute of Mathematics Polish Academy of Sciences Chopina 12,87100 Torun, Poland [email protected] Mathematical Statistics,Probabilistic Inequalities 30) Bl. Sendov Institute of Mathematics and Informatics Bulgarian Academy of Sciences Sofia 1090,Bulgaria [email protected] Approximation Th.,Geometry of Polynomials, Image Compression 31) Igor Shevchuk Faculty of Mathematics and Mechanics National Taras Shevchenko University of Kyiv 252017 Kyiv UKRAINE [email protected] Approximation Theory 32) H.M.Srivastava Department of Mathematics and Statistics University of Victoria Victoria,British Columbia V8W 3P4 Canada tel.250-721-7455 office,250-477-6960 home, fax 250-721-8962 [email protected] Real and Complex Analysis,Fractional Calculus and Appl., Integral Equations and Transforms,Higher Transcendental Functions and Appl.,q-Series and q-Polynomials, Analytic Number Th. 33) Stevo Stevic Mathematical Institute of the Serbian Acad. of Science Knez Mihailova 35/I 11000 Beograd, Serbia [email protected]; [email protected] Complex Variables, Difference Equations, Approximation Th., Inequalities 34) Ferenc Szidarovszky Dept.Systems and Industrial Engineering The University of Arizona Engineering Building,111 PO.Box 210020 Tucson,AZ 85721-0020,USA [email protected] Numerical Methods,Game Th.,Dynamic Systems,

191

Institute of Mathematics Gerhard Mercator University D-47048 Duisburg Germany tel.0049-203-379-2676 [email protected] Approximation Theory,Interpolation 16) Jerry Koliha Dept. of Mathematics & Statistics University of Melbourne VIC 3010,Melbourne Australia [email protected] Inequalities,Operator Theory, Matrix Analysis,Generalized Inverses 17) Mustafa Kulenovic Department of Mathematics University of Rhode Island Kingston,RI 02881,USA [email protected] Differential and Difference Equations 18) Gerassimos Ladas Department of Mathematics University of Rhode Island Kingston,RI 02881,USA [email protected] Differential and Difference Equations 19) V. Lakshmikantham Department of Mathematical Sciences Florida Institute of Technology Melbourne, FL 32901 e-mail: [email protected] Ordinary and Partial Differential Equations, Hybrid Systems, Nonlinear Analysis 20) Rupert Lasser Institut fur Biomathematik & Biomertie,GSF -National Research Center for environment and health Ingolstaedter landstr.1 D-85764 Neuherberg,Germany [email protected] Orthogonal Polynomials,Fourier Analysis, Mathematical Biology

 

Multicriteria Decision making, Conflict Resolution,Applications in Economics and Natural Resources Management 35) Gancho Tachev Dept.of Mathematics Univ.of Architecture,Civil Eng. and Geodesy 1 Hr.Smirnenski blvd BG-1421 Sofia,Bulgaria [email protected] Approximation Theory 36) Manfred Tasche Department of Mathematics University of Rostock D-18051 Rostock Germany [email protected] Approximation Th.,Wavelet,Fourier Analysis, Numerical Methods,Signal Processing, Image Processing,Harmonic Analysis 37) Chris P.Tsokos Department of Mathematics University of South Florida 4202 E.Fowler Ave.,PHY 114 Tampa,FL 33620-5700,USA [email protected],[email protected] Stochastic Systems,Biomathematics, Environmental Systems,Reliability Th. 38) Lutz Volkmann Lehrstuhl II fuer Mathematik RWTH-Aachen Templergraben 55 D-52062 Aachen Germany [email protected] Complex Analysis,Combinatorics,Graph Theory

192

EDITOR’S NOTE  This special issue on “Applied Mathematics and Approximation Theory” contains  expanded versions of articles that were presented in the international conference  “Applied Mathematics and Approximation Theory 2008” ( AMAT 08), during   October 11‐13, 2008 at the University of Memphis, Memphis, Tennessee, USA.  All articles were refereed.        The organizer and Editor               George Anastassiou 

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.2, 193-207, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Approximation by Nonlinear Bernstein and Favard-Sz´asz-Mirakjan Operators of Max-Product Kind Barnab´as Bede1 and Sorin G. Gal2 1

Department of Mathematics, The University of Texas-Pan American, 1201 West University, Edinburg, Tx, 78539, USA E-mail: [email protected], [email protected] 2

Department of Mathematics and Computer Science, The University of Oradea, Universitatii 1, 410087, Oradea, Romania E-mail: [email protected] Abstract

We address in the present paper the following problem : is the linear structure the only one which allows us to construct approximation operators ? As an answer to this problem we propose new, so-called pseudolinear approximation operators, which are defined in a max-product algebra. We consider a nonlinear (max-product) Bernstein operator and a nonlinear (max-product) Favard-Sz´ asz-Mirakjan operator. We prove that the approximation errors given by these operators are similar to those of the corresponding linear operators provided by the classical Approximation Theory. Also, by some graphs we illustrate that the nonlinear operators can better reproduce the shape of some continuous functions (which are not differentiable or have abrupt changes of the values in some points) than their linear counterparts.

1

Introduction

The linear approximation operators provided by the classical Approximation Theory use exclusively as underlying algebraic structure the linear space structure of the reals. In the present paper we address the following problem : is the linear structure the only one which allows us to construct approximation operators ?

1

194

BEDE-GAL: MAX-PRODUCT APPROXIMATION

This problem was proposed in [1], [2] for Shepard operators. The findings in these two papers were that besides the linear structure we may opt for different structures as max-product, max-min (fuzzy) algebras or semirings with generated pseudo-operations. In this sense, Shepard-type operators of Max-Product and Max-Min kinds were studied together with operators of Shepard-type based on generated pseudo-operations. Note that these operators are nonlinear (they are so-called pseudo-linear). So, the topic of these papers and the present one partially fits on the area of Nonlinear Approximation, as it is described by e.g. [3], [5]. It is only a partial fit on this area, because while the operators are nonlinear, the algebraic structure underlying them is still a structure of linear space. In [3] and [5], the nonlinearity comes from the construction of approximations based on a dictionary of functions. Because of this reason, the method is not fully constructive. In contrast, the approach presented here is fully constructive and quite simple. Recently, the monograph [4] (see pp. 324-326, Open Problem 5.5.4) brings into attention the same question of constructing approximation operators with max-product or max-min operations, but in a more systematic way. Thus, following [4] we consider the nonlinear Bernstein operator of max-product kind

B_n^{(M)}(f)(x) = [ ⋁_{k=0}^{n} p_{n,k}(x) f(k/n) ] / [ ⋁_{k=0}^{n} p_{n,k}(x) ],

with p_{n,k}(x) = (n choose k) x^k (1 − x)^{n−k}, and the nonlinear Favard-Szász-Mirakjan operator of max-product kind

F_n^{(M)}(f)(x) = [ ⋁_{k=0}^{∞} ((nx)^k / k!) f(k/n) ] / [ ⋁_{k=0}^{∞} (nx)^k / k! ].

Our findings are that these operators have very similar properties to the corresponding linear operators provided by the classical Approximation Theory. In this sense we show that the error estimates in approximation by these operators are of order O( ω1(f; 1/√n) ). Some experimental results are also discussed and our finding is that the proposed max-product operators can better follow abrupt changes in the target function than their classical linear counterparts.
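A minimal Python sketch of the max-product Bernstein operator, following the definition above, is given below; the test function is an arbitrary continuous, nonnegative choice and is not taken from the paper.

    import numpy as np
    from scipy.special import comb

    def max_product_bernstein(f, n, x):
        """Evaluate B_n^{(M)}(f)(x) = max_k p_{n,k}(x) f(k/n) / max_k p_{n,k}(x), x in [0, 1]."""
        x = np.atleast_1d(np.asarray(x, dtype=float))
        k = np.arange(n + 1)
        # p_{n,k}(x) for every (x, k) pair
        p = comb(n, k) * x[:, None]**k * (1 - x[:, None])**(n - k)
        vals = f(k/n)
        return np.max(p*vals, axis=1) / np.max(p, axis=1)

    f = lambda t: np.abs(t - 0.3) + 0.1      # continuous, nonnegative test function
    x = np.linspace(0, 1, 5)
    print(max_product_bernstein(f, 50, x))
    print(f(x))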

2

Nonlinear approximation operators

Over the set of positive reals, R+ , we consider the operations ∨ (maximum) and · product. Then (R+ , ∨, ·) has a semiring structure and we call it as Max-Product algebra. 2

BEDE-GAL: MAX-PRODUCT APPROXIMATION

Let I ⊂ R be a bounded or unbounded interval, and CB+ (I) = {f : I → R+ ; f continuous and bounded on I}. The general discrete form of a max-product approximation operator Ln : CB+ (I) → CB+ (I), is n _ Ln (f )(x) = Kn (x, xi ) · f (xi ), i=0

or Ln (f )(x) =

∞ _

Kn (x, xi ) · f (xi ),

i=0

where n ∈ N, f ∈ CB+ (I), Kn (·, xi ) ∈ CB+ (I) and xi ∈ I, for all i. These operators are nonlinear, positive operators and moreover they satisfy a pseudolinearity condition of the form Ln (α · f ∨ β · g)(x) = α · Ln (f )(x) ∨ β · Ln (g)(x), ∀α, β ∈ R+ , f, g : I → R+ . In this section we present some general results on positive nonlinear operators. These are used later in the study of nonlinear Bernstein and Favard-Sz´aszMirakjan operators of max-product kind. Lemma 1 Let I ⊂ R be a bounded or unbounded interval, CB+ (I) = {f : I → R+ ; f continuous and bounded on I}, and Ln : CB+ (I) → CB+ (I), n ∈ N be a sequence of operators satisfying the following properties : (i) if f, g ∈ CB+ (I) satisfy f ≤ g then Ln (f ) ≤ Ln (g) for all n ∈ N ; (ii) Ln (f + g) ≤ Ln (f ) + Ln (g) for all f, g ∈ CB+ (I). Then for all f, g ∈ CB+ (I), n ∈ N and x ∈ I we have |Ln (f )(x) − Ln (g)(x)| ≤ Ln (|f − g|)(x). Proof. Let f, g ∈ CB+ (I). We have f = f − g + g ≤ |f − g| + g, which by the conditions (i) − (ii) successively implies Ln (f )(x) ≤ Ln (|f − g|)(x) + Ln (g)(x), that is Ln (f )(x) − Ln (g)(x) ≤ Ln (|f − g|)(x). Writing now g = g − f + f ≤ |f − g| + f and applying the above reasonings, it follows Ln (g)(x) − Ln (f )(x) ≤ Ln (|f − g|)(x), which combined with the above inequality gives |Ln (f )(x) − Ln (g)(x)| ≤ Ln (|f − g|)(x). Remark 2 1) It is easy to see that our max-product operators satisfy the conditions in Lemma 1, (i), (ii). In fact, instead of (i) they satisfy the stronger condition Ln (f ∨ g)(x) = Ln (f )(x) ∨ Ln (g)(x), f, g ∈ CB+ (I). Indeed, taking in the above equality f ≤ g, f, g ∈ CB+ (I), it easily follows Ln (f )(x) ≤ Ln (g)(x). 2) In addition, it is immediate that the max-product operators are positive homogenous, that is Ln (λf ) = λLn (f ) for all λ ≥ 0. 3




Corollary 3. Let $L_n : CB_{+}(I)\to CB_{+}(I)$, $n\in\mathbb{N}$, be a sequence of operators satisfying conditions (i)-(ii) in Lemma 1 and, in addition, being positively homogeneous. Then for all $f\in CB_{+}(I)$, $n\in\mathbb{N}$ and $x\in I$ we have
$$|f(x)-L_n(f)(x)|\le\left[\frac{1}{\delta}L_n(\varphi_x)(x)+L_n(e_0)(x)\right]\omega_1(f;\delta)_I+f(x)\cdot|L_n(e_0)(x)-1|,$$
where $\delta>0$, $e_0(t)=1$ for all $t\in I$, $\varphi_x(t)=|t-x|$ for all $t\in I$, $x\in I$, and $\omega_1(f;\delta)_I=\max\{|f(x)-f(y)|;\ x,y\in I,\ |x-y|\le\delta\}$.

Proof. The proof is identical to that for positive linear operators. Indeed, from the identity
$$L_n(f)(x)-f(x)=[L_n(f)(x)-f(x)\cdot L_n(e_0)(x)]+f(x)[L_n(e_0)(x)-1],$$
it follows (by positive homogeneity and by Lemma 1) that
$$|f(x)-L_n(f)(x)|\le|L_n(f(x))(x)-L_n(f(t))(x)|+|f(x)|\cdot|L_n(e_0)(x)-1|\le L_n(|f(t)-f(x)|)(x)+|f(x)|\cdot|L_n(e_0)(x)-1|.$$
Now, since for all $t,x\in I$ we have
$$|f(t)-f(x)|\le\omega_1(f;|t-x|)_I\le\left[\frac{1}{\delta}|t-x|+1\right]\omega_1(f;\delta)_I,$$
replacing above we immediately obtain the estimate in the statement.

An immediate consequence of Corollary 3 is the following.

Corollary 4. Suppose that, in addition to the conditions in Corollary 3, the sequence $(L_n)_n$ satisfies $L_n(e_0)=e_0$ for all $n\in\mathbb{N}$. Then for all $f\in CB_{+}(I)$, $n\in\mathbb{N}$ and $x\in I$ we have
$$|f(x)-L_n(f)(x)|\le\left[1+\frac{1}{\delta}L_n(\varphi_x)(x)\right]\omega_1(f;\delta)_I.$$

Remark 5. The max-product operators that we construct in the following sections satisfy the additional conditions in Corollaries 3 and 4, so that Corollary 4 applies to this kind of operator.

3

Nonlinear Bernstein Operator

Let $f:[0,1]\to\mathbb{R}_{+}$ be continuous. We consider the following nonlinear Bernstein operator of max-product type
$$B_n^{(M)}(f)(x)=\frac{\bigvee_{k=0}^{n} p_{n,k}(x)\,f\!\left(\frac{k}{n}\right)}{\bigvee_{k=0}^{n} p_{n,k}(x)},\qquad(1)$$
with
$$p_{n,k}(x)=\binom{n}{k}x^{k}(1-x)^{n-k}.$$

We provide an error estimate for this approximation operator in terms of the modulus of continuity.

Theorem 6. The following pointwise estimate holds true for the nonlinear Bernstein operator:
$$|B_n^{(M)}(f)(x)-f(x)|\le C\,\omega_1\!\left(f,\ \frac{\sqrt{x(1-x)}}{\sqrt{n}}+\frac{1}{n}\right)_{[0,1]},\qquad x\in[0,1],\ n\in\mathbb{N},$$
where $C>0$ is an absolute constant independent of $f$, $n$ and $x$, and $\omega_1(f,\delta)_{[0,1]}=\sup\{|f(x)-f(y)|;\ x,y\in[0,1],\ |x-y|\le\delta\}$.

Proof. It is easy to check that the max-product Bernstein operators fulfil the conditions in Corollary 4, and we have
$$|B_n^{(M)}(f)(x)-f(x)|\le\left(1+\frac{1}{\delta_n}B_n^{(M)}(\varphi_x)(x)\right)\omega_1(f,\delta_n)_{[0,1]},$$
where $\varphi_x(t)=|t-x|$. So, it is enough to estimate
$$B_n^{(M)}(\varphi_x)(x)=\frac{\bigvee_{k=0}^{n}p_{n,k}(x)\left|\frac{k}{n}-x\right|}{\bigvee_{k=0}^{n}p_{n,k}(x)}.$$
First we calculate $\bigvee_{k=0}^{n}p_{n,k}(x)$ for fixed $n,x$. Let $E_{n,k}(x)=p_{n,k+1}(x)-p_{n,k}(x)$, $k\in\{0,\dots,n-1\}$. By successive calculations we have
$$E_{n,k}(x)=\binom{n}{k+1}x^{k+1}(1-x)^{n-k-1}-\binom{n}{k}x^{k}(1-x)^{n-k}=\binom{n}{k}x^{k}(1-x)^{n-k-1}\,\frac{nx-k-1+x}{k+1}.$$
We have $E_{n,k}(x)\ge 0$, i.e. $p_{n,k+1}(x)\ge p_{n,k}(x)$, if and only if $nx-k-1+x\ge 0$. Further, $nx-k-1+x\ge 0$ if $k\le nx-(1-x)$, i.e. $k\le[(n+1)x]-1$. It follows that $\bigvee_{k=0}^{n}p_{n,k}(x)=p_{n,r}(x)=\binom{n}{r}x^{r}(1-x)^{n-r}$, with $r=[(n+1)x]$.

Now we estimate the ratio
$$\frac{p_{n,k}(x)\left|\frac{k}{n}-x\right|}{p_{n,r}(x)},\qquad r=[(n+1)x].$$




We observe that nx < nx + x < nx + 1, so we have two cases: r = [nx] or r = [nx] + 1. We will present only the case r = [nx], similar result being true for r = [nx] + 1. We observe that r = [nx] implies nx − 1 ≤ r < nx, i.e., r r+1 0 for (r − k)2 > r + 1 −

r 2 +r n ,

i.e.,

q q 2 r − k > r + 1 − r n+r and this holds for k < r − (r+1)(n−r) . The maximal n term, therefore is given by # "r # " r (r + 1)(n − r) (r + 1)(n − r) +1=r− , k0 = r − n n



199

so we have Ankr ≤ Bn,k,r ≤ Bn,k0 ,r . Further, by Stirling formula we obtain √ √ µ ¶r−k0 2πr r 2π(n−r) r (n − r)n−r n−r r − k0 + 1 r e en−r √ Bn,k0 ,r ∼ √ 2π(n−k ) r n 0 2πk0 k0 k0 (n − k0 )n−k0 ek 0 en−k0 µ ¶k0 + 12 µ ¶n−k0 + 12 r n−r 1 = (r − k0 + 1) . k0 n − k0 n q Now, since k0 ∼ r − (r+1)(n−r) we get n Bn,k0 ,r

v u ∼u t

q r−

r (r+1)(n−r) n

Ãr

n−r q

(r+1)(n−r) n

n−r+

(r + 1)(n − r) +1 n

!

We observe that q r−

r (r+1)(n−r) n

and

n−r q

(r+1)(n−r) n

n−r+

q

= 1+

s

r

(r + 1) (2r + 1) ≥ 2 nr2 (n − r)

1 (r+1) nr 2 (n−r) (2r

+ 1) −

(r+1) nr

r n(n − r)

so we obtain q 1+

1 (r+1) nr 2 (n−r) (2r

+ 1) −

(r+1) nr



1+2

q

1 r n(n−r)



1 n

∼ O(1).

Finally, we obtain Ãr An,k,r ≤ C Since r = [nx] we get Ãr An,k,r ≤ C Case 2. If k > r then

(r + 1)(n − r) +1 n

(nx + 1)(n − nx) +1 n

!

1 ∼C n

!

Ãp

¯ ¯ ¯ ¯k ¯ − x¯ = k − x ≤ k − r . ¯ n ¯n n

We estimate the expression An,k,r

¯ ¯ pn,k (x) ¯ nk − x¯ = , k > r. pn,r (x) 7

1 . n

x(1 − x) 1 √ + n n

! .

1 . n



We have An,k,r

¯ ¯ ¡n¢ k µ ¶r−k − x)n−k ¯ nk − x¯ r!(n − r)! 1 − x k−r k x¡ (1 ¢ = ≤ . n r n−r k!(n − k)! x n r x (1 − x)

Since the function obtain

1−x x

An,k,r ≤

is decreasing we have by (2) r!(n − r)! k!(n − k)!

µ

r+1 n−r−1

¶k−r

1−x x



n−r−1 r+1 .

Further we

k−r = Br,k,n . n

We find, similar Case 1, the¸maximal term · to ·qBr,k,n for fixed ¸ n, r to be obtained q (r+1)(n−r) (r+1)(n−r) when k0 = r + +1 = r+ . Now we have k0 ∼ n n q r + (r+1)(n−r) and so, n Bn,k0 ,r

v u ∼u t

q r−

r (r+1)(n−r) n

Ãr

n−r q

(r+1)(n−r) n

n−r+

(r + 1)(n − r) +1 n

!

1 . n

Similar to case 1, finally we obtain Ãr ! (r + 1)(n − r) 1 An,k,r ≤ C +1 . n n and Ãr An,k,r ≤ C

(nx + 1)(n − nx) +1 n

!

1 ∼C n

Ãp

x(1 − x) 1 √ + n n

! .

Taking into account the estimates in Cases 1 and 2 we get Ãp ! x(1 − x) 1 C1 √ Bn(M ) (ϕx )(x) ≤ C + ≤√ . n n n √ Taking δn =

x(1−x) √ n

+

1 n

we obtain Ã

|Bn(M ) (f )(x)

− f (x)| ≤ Cω1

f,

p

x(1 − x) 1 √ + n n

! , [0,1]

which completes the proof.

Remark 7. 1) The error estimate given by the above theorem is similar to the error estimate for the linear Bernstein operators. However, since obviously $B_n^{(M)}(f)(0)=f(0)$ and $B_n^{(M)}(f)(1)=f(1)$, it is clear that the term $1/n$ in the estimate of Theorem 6 could be dropped.


2) The proof of the above theorem evidently implies the uniform estimate
$$\max_{x\in[0,1]}|B_n^{(M)}(f)(x)-f(x)|\le C_1\,\omega_1\!\left(f,\ \frac{1}{\sqrt{n}}\right)_{[0,1]},\qquad n\in\mathbb{N}.$$
However, we conjecture that the order of approximation $O[\omega_1(f;1/\sqrt{n})]$ in Theorem 6 might in fact be improved to $O[\omega_1(f;\ln(n)/n)]$.

4

Nonlinear Favard-Szász-Mirakjan Operator

Let $f:[0,\infty)\to\mathbb{R}_{+}$ be continuous. We consider the following nonlinear Favard-Szász-Mirakjan operator
$$F_n^{(M)}(f)(x)=\frac{\bigvee_{k=0}^{\infty}\frac{(nx)^{k}}{k!}\,f\!\left(\frac{k}{n}\right)}{\bigvee_{k=0}^{\infty}\frac{(nx)^{k}}{k!}}.$$

We provide an error estimate for this approximation operator in terms of the modulus of continuity, similarly to the case of the Bernstein operator.

Theorem 8. The following pointwise estimate holds true for the nonlinear Favard-Szász-Mirakjan operator:
$$|F_n^{(M)}(f)(x)-f(x)|\le C\,\omega_1\!\left(f,\ \frac{\sqrt{x}}{\sqrt{n}}\right)_{[0,\infty)},\qquad x\ge 0,\ n\in\mathbb{N},$$
where $C>0$ is an absolute constant (independent of $f$, $n$ and $x$) and $\omega_1(f,\delta)_{[0,\infty)}=\sup\{|f(x)-f(y)|;\ x,y\ge 0,\ |x-y|\le\delta\}$.

Proof. By Corollary 4, the error bound is controlled by the ratio
$$F_n^{(M)}(\varphi_x)(x)=\frac{\bigvee_{k=0}^{\infty}\frac{(nx)^{k}}{k!}\left|\frac{k}{n}-x\right|}{\bigvee_{k=0}^{\infty}\frac{(nx)^{k}}{k!}}.$$

First we calculate $\bigvee_{k=0}^{\infty}\frac{(nx)^{k}}{k!}$ for fixed $n\in\mathbb{N}$, $x\in[0,\infty)$. We observe that
$$E_{n,k}(x)=\frac{(nx)^{k+1}}{(k+1)!}-\frac{(nx)^{k}}{k!}=\frac{(nx)^{k}}{k!}\cdot\frac{nx-k-1}{k+1}.$$





We have En,k (x) ≥ 0 if and only if k ≤ nx − 1. It follows that

∞ _ (nx)k k!

=

(nx)r r! ,

k=0

with r = [nx]. Now we estimate the ratio ∞ _ ¯ ¯ (nx)k ¯ k ¯ k! n −x

Rn,r (x) =

k=0 (nx)r r!

, r = [nx].

Case 1. If k ≤ r then we have ¯ ¯ ¯k ¯ ¯ − x¯ = x − k ≤ r − k + 1 . ¯n ¯ n n We estimate in what follows the expression ¯ ¯ (nx)k ¯ k ¯ k! n −x An,k,r = , k ≤ r. (nx)r r!

As in the proof of the estimate for the Bernstein operator, we have by (3) and (2) (nx)k r! r − k + 1 r! r − k + 1 An,k,r ≤ ≤ rk−r k! (nx)r n k! n r! Let Er,k = rk−r k! (r − k + 1). We have

r! r! (r − k) − rk−r (r − k + 1) (k + 1)! k! 2 r! (r − k) − r − 1 = rk−r k! k+1

Er,k+1 − Er,k = rk−r+1

It √ is easy to check √ that the maximal term Er,k is attained for k0 = [r + 1 − r + 1] = r − [ r + 1]. By Stirling formula we obtain √ r 2πr rer k0 −r Er,k ≤ Er,k0 = r k (r − k0 + 1) √ k 0 2πk0 ek00 √

1 √ ¢ ¡√ rr−[ r+1]+ 2 −[ r+1] √ = [ r + 1] + 1 . √ 1 e r+1]+ r−[ 2 (r − [ r + 1])

Further we have, √

Er,k

1 √ ¢ ¡√ rr− r+1+ 2 − r+1 √ = r+1+1 √ 1 e r− r+1+ 2 (r − r + 1) r r √ √ r r √ √ ∼ ( r + 1 + 1) = ( r + 1 + 1), r− r+1 r− r+1

10

BEDE-GAL: MAX-PRODUCT APPROXIMATION

and taking into account that r √ ¡√ ¢ ¡√ ¢ r √ ( r + 1 + 1) ≤ r+1+1 ≤ nx + 1 + 1 r− r+1 finally we obtain

√ An,k,r ≤

nx + 1 + 1 =O n

µ√ ¶ x √ . n

Case 2. If k > r then we have ¯

An,k,r =

(nx)k ¯ k k! n − (nx)r r!

¯ x¯

≤ (r + 1)

k−r

r! k − r . k! n

k−r

r! Further, if Er,k = (r + 1) k! (k − r), similar to Case 1, the maximal term is √ attained for k0 = r + [ r + 1]. By Stirling formula we have by successive calculations r ¢ ¡√ r √ r+1+1 . Er,k ≤ Er,k0 ∼ r+ r+1 q ¡√ ¢ √ Further we have r+√rr+1 ( r + 1 + 1) ≤ nx + 1 + 1 and finally we obtain

√ An,k,r ≤

nx + 1 + 1 =O n

µ√ ¶ x √ , k > r. n

Taking into account the estimates in Cases 1 and 2 we get
$$F_n^{(M)}(\varphi_x)(x)=O\!\left(\frac{\sqrt{x}}{\sqrt{n}}\right).$$
Taking $\delta_n=\frac{\sqrt{x}}{\sqrt{n}}+\frac{1}{n}$ we obtain
$$|F_n^{(M)}(f)(x)-f(x)|\le C\,\omega_1\!\left(f,\ \frac{\sqrt{x}}{\sqrt{n}}\right)_{[0,\infty)}.$$

Remark 9. 1) The error estimate shown in this theorem is similar to that for the linear Favard-Szász-Mirakjan operator.
2) The pointwise error estimate in the above theorem obviously implies the uniform estimate
$$\max_{x\in[0,a]}|F_n^{(M)}(f)(x)-f(x)|\le C\,\omega_1\!\left(f,\ \frac{1}{\sqrt{n}}\right)_{[0,\infty)}$$
for any $a>0$. However, we conjecture that the order of approximation $O[\omega_1(f;1/\sqrt{n})]$ in Theorem 8 might in fact be improved to $O[\omega_1(f;\ln(n)/n)]$.




3) It is interesting to note that the same estimate could be obtained for the truncated version of the Favard-Szász-Mirakjan operator
$$F_n^{(M)}(f)(x)=\frac{\bigvee_{k=0}^{n}\frac{(nx)^{k}}{k!}\,f\!\left(\frac{k}{n}\right)}{\bigvee_{k=0}^{n}\frac{(nx)^{k}}{k!}},\qquad x\in[0,\infty),\ n\in\mathbb{N}.$$

5

Applications

First, it is worth noting that from a computational point of view the nonlinear operators of the previous sections require less computation than their linear counterparts, since computing a "max" is faster than computing a "sum". Also, since the maximal term in the denominator is found explicitly in the proofs of the proposed theorems, it does not require an extra loop to compute. In what follows, we illustrate by two concrete examples another possible use of the proposed nonlinear operators. Thus, let us consider the functions $f_1,f_2:[0,1]\to\mathbb{R}_{+}$,
$$f_1(x)=2+\sin\frac{1}{x+0.1}$$
and
$$f_2(x)=\begin{cases}1 & \text{if } 0\le x\le 0.4,\\ 10x-3 & \text{if } 0.4<x\le 0.5,\\ 2 & \text{if } 0.5<x\le 1.\end{cases}$$
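Before turning to the figures, the following minimal computational sketch (ours, not part of the original paper) shows how $B_n^{(M)}$ and the truncated $F_n^{(M)}$ can be evaluated at a point for these two test functions; the function names, the log-domain handling of the weights $(nx)^k/k!$, and the choice $n=20$ are our own assumptions.

```python
import numpy as np
from math import comb

def f1(x):
    return 2 + np.sin(1.0 / (x + 0.1))

def f2(x):
    # piecewise test function from the text
    return np.where(x <= 0.4, 1.0, np.where(x <= 0.5, 10 * x - 3, 2.0))

def bernstein_max_product(f, n, x):
    """B_n^{(M)}(f)(x): max_k p_{n,k}(x) f(k/n) divided by max_k p_{n,k}(x)."""
    k = np.arange(n + 1)
    p = np.array([comb(n, j) for j in k], dtype=float) * x**k * (1 - x)**(n - k)
    return np.max(p * f(k / n)) / np.max(p)

def favard_max_product(f, n, x):
    """Truncated F_n^{(M)}(f)(x); the weights (nx)^k/k! are computed in log form."""
    k = np.arange(n + 1)
    log_w = k * np.log(max(n * x, 1e-300)) - np.cumsum(np.log(np.maximum(k, 1)))
    w = np.exp(log_w - log_w.max())          # common factor cancels in the ratio
    return np.max(w * f(k / n)) / np.max(w)

if __name__ == "__main__":
    n = 20
    for x in np.linspace(0.0, 1.0, 6):
        print(f"x={x:.1f}  B_n f2={bernstein_max_product(f2, n, x):.3f}"
              f"  F_n f2={favard_max_product(f2, n, x):.3f}  f2={float(f2(x)):.3f}")
```

Only maxima are taken, so no summation loop is needed, which reflects the computational remark made above.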

In Figs. 1 and 2 we compare the linear and nonlinear Bernstein and Favard-Szász-Mirakjan operators approximating the function f1. We observe that the nonlinear operators of either Bernstein or Favard-Szász-Mirakjan type are able to better follow the peaks of the original function. In Figs. 3 and 4 we compare the linear and nonlinear Bernstein and Favard-Szász-Mirakjan operators associated with the function f2.

Remark 10. It is clear that, in contrast to their linear counterparts and due to the "max" operator, these nonlinear operators in general do not preserve the smoothness of the function f(x); that is, even if f has a continuous derivative, $B_n^{(M)}(f)(x)$ and $F_n^{(M)}(f)(x)$ may fail to be differentiable at some points. However, as the graphs for the function f2(x) show, for the approximation of functions which are not differentiable at some points these operators seem to be more suitable than their linear counterparts.





Figure 1: Comparison of Bernstein operators. Solid line: f1 , dotted line: linear case, dashed line: nonlinear case.


Figure 2: Comparison of Favard-Szász-Mirakjan operators. Solid line: f1, dotted line: linear case, dashed line: nonlinear case.





Figure 3: Comparison of Bernstein operators. Solid line: f2 , dotted line: linear case, dashed line: nonlinear case.


Figure 4: Comparison of Favard-Szász-Mirakjan operators. Solid line: f2, dotted line: linear case, dashed line: nonlinear case.



References
[1] B. Bede, H. Nobuhara, J. Fodor, K. Hirota, Max-product Shepard approximation operators, Journal of Advanced Computational Intelligence and Intelligent Informatics, 10 (2006), 494-497.
[2] B. Bede, H. Nobuhara, M. Daňková, A. Di Nola, Approximation by pseudo-linear operators, Fuzzy Sets and Systems, 159 (2008), 804-820.
[3] R.A. DeVore, V.N. Temlyakov, Nonlinear approximation in finite-dimensional spaces, J. of Complexity, 13 (1997), 489-508.
[4] S.G. Gal, Shape-Preserving Approximation by Real and Complex Polynomials, Birkhäuser, Boston-Basel-Berlin, 2008.
[5] A. Hofinger, Nonlinear function approximation: computing smooth solutions with an adaptive greedy algorithm, Journal of Approximation Theory, 143 (2006), 159-175.



JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.2, 208-215, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

L-APPROXIMATION TO NON-PERIODIC FUNCTIONS MICHAEL I. GANZBURG

Department of Mathematics, Hampton University, Hampton, Virginia 23668, E-mail: [email protected] Abstract. Some problems of finding exact values of the errors Aσ (f )L(R) of best approximation by entire functions of exponential type in integral metrics are discussed. In particular, we prove generalized Markov- and Nagy-type theorems and apply them to find Aσ (f )L(R) for some even and odd functions, such as |x|λ , sgn(x) |x|λ , and xn log |x|. KEY WORDS: Best approximation, entire functions of exponential type.

1. Introduction In this paper we discuss some problems of finding exact values of the errors of best approximation by entire functions of exponential type in the integral metric. R Let L(R) be the Banach space of all functions f with the finite norm ||f ||L(R) := |f (x)dx and let Bσ be the class of all entire functions of exponential type σ > R 0. We define the error of best approximation by functions from Bσ to a locally integrable function f : R → R by Aσ (f )L(R) := inf ||f − gσ ||L(R) . gσ ∈Bσ

The Fourier transform of a function or a tempered distribution f is denoted by F(f ); similarly, the cos-transform of an even function or a tempered distribution f is denoted by Fc (f ) and the sin-transform of an odd function or a tempered distribution f is denoted by Fs (f ). In particular for f ∈ L(R), Z Z Fc (f )(t) := f (x) cos tx dx, Fs (f )(t) := f (x) sin tx dx, R ZR F(f )(t) := f (x) exp(itx) dx. R

Study of the problem of finding Aσ (f )L(R) for some continuous functions f was initiated by M. Krein [13] (see also [1, Sec. 87]) in 1938 who proved the following result: Theorem 1.1. Let a continuous function f satisfy the inequality |f (x)| ≤ C(1 + x2 )−1 , x ∈ R, and let there exist a number α ∈ [0, π/σ] and an entire function gσ ∈ Bσ ∩ L(R) such that the following Mσ -condition holds: the product sin[σ(x − α)](f (x) − gσ (x)) does not change its sign for all x ∈ R. Then gσ is an entire 1

209

2

MICHAEL I. GANZBURG

function of best approximation to f in L(R) and ¯Z ¯ ¯ ¯ ¯ Aσ (f )L(R) = ¯ f (x) sgn sin[σ(x − α)]dx¯¯ R ¯∞ ¯ ¯ 4 ¯¯X 1 ¯ = Im(e−iα(2k+1)σ F(f )((2k + 1)σ))¯ . ¯ ¯ π¯ 2k + 1 k=0

Note that the Mσ -condition of Theorem 1.1 means that gσ interpolates f at equidistant points of R and the difference f − gσ changes its sign at these and only these points. We also remark that α cannot be chosen arbitrarily in Theorem 1.1 since α is a solution of the equation ∞ X

(−1)k f (α + kπ/σ) = 0.

k=−∞

In 1939 Nagy [16] (see also [1, Sec. 88]) found classes of odd and even functions on R that satisfy the Mσ -condition. Theorem 1.2. (a) Let f be an even function from L(R) such that its cos-transform Fc (f ) is twice differentiable on R, thrice differentiable on [σ, ∞), and for t > σ, dFc (f )(t) d2 Fc (f )(t) ≤ 0, ≥ 0, dt dt2 Then f satisfies Mσ -condition with α = π/(2σ) and Fc (f )(t) > 0,

Aσ (f )L(R) =

d3 Fc (f )(t) ≥ 0. dt3

∞ 4X Fc (f )((2k + 1)σ) (−1)k . π 2k + 1

(1.1)

k=0

(b) Let f be an odd function from L(R) such that its sin-transform Fs (f ) is twice differentiable on R and for t > σ, dFs (f )(t) ≤ 0, dt Then f satisfies Mσ -condition with α = 0 and Fs (f )(t) > 0,

Aσ (f )L(R) =

d2 Fs (f )(t) ≥ 0. dt2

∞ 4 X Fs (f )((2k + 1)σ) . π 2k + 1

(1.2)

k=0

Theorems 1.1 and 1.2 have been used in approximation theory for finding exact constants of approximation on convolution classes (see [1, 20, 12]). Since 1985 Vaaler and his students have published several papers on best approximation to some locally integrable functions. Their research has been influenced by applications of these results to some problems of number theory. In particular, Vaaler [21, Th. 4] established the relation Aσ (sgn x)L(R) = π/σ.

(1.3)

Littmann [14, Th. 6.2] generalized this result by proving the relation Aσ (sgn(x) xn )L(R) = Since

P∞

k=0 (2k

∞ 8 n! X (−1)kn , πσ n+1 (2k + 1)n+2 k=0

+ 1)−2 = π 2 /8, (1.3) follows from (1.4).

n = 0, 1, . . .

(1.4)

210

L-APPROXIMATION TO NON-PERIODIC FUNCTIONS

3

In recent paper [4, Sec. 1] Carneiro and Vaaler established the following results: ∞

Aπ (|x|λ )L(R) =

8Γ(λ + 1) sin(πλ/2) X (−1)k , λ+2 π (2k + 1)λ+2

|λ| ≤ 1, (1.5)

k=0

Aσ (log |x|)L(R) =

∞ 4 X (−1)k . σ (2k + 1)2

(1.6)

k=0

Note that Theorems 1.1 and 1.2 cannot be applied to this functions since all of them do not belong to L(R). In this paper we extend Theorems 1.1 and 1.2 to locally integrable functions on R and show that relations (1.3) through (1.6) are easy corollaries of these results. Other examples are discussed as well. 2. Krein- and Nagy-type Theorems for Locally Integrable Functions We first discuss a more general version of Theorem 1.1. Theorem 2.1. Let f be a locally integrable function on R and let there exist a number α ∈ [0, π/σ] and an entire function gσ ∈ Bσ such that f − gσ ∈ L(R) and the following Mσ∗ -condition holds: the product sin[σ(x − α)](f (x) − gσ (x)) does not change its sign for a. a. x ∈ R. Then gσ is an entire function of best approximation to f in L(R) and ¯Z ¯ ¯ ¯ Aσ (f )L(R) = ¯¯ (f (x) − gσ (x)) sgn sin[σ(x − α)]dx¯¯ . R

Proof. If g ∈ Bσ ∩ L(R), then F(g)(t) is continuous on R. Therefore, by the PaleyWiener theorem [19, Th. 7.2.1], F(g)(t) = 0 for |t| ≥ σ. Next using properties of Fourier sums of functions of bounded variation over (0, 2π) (see [22, Th. 2.8.1] and [1, Sec. 53]), we have n

X sin[(2k + 1)σ(x − α)] 4 lim = sgn sin[σ(x − α)], π n→∞ 2k + 1

x ∈ R,

k=0

and

¯ n ¯ ¯ 4 X sin[(2k + 1)σ(x − α)] ¯ ¯ ¯ sup sup ¯ ¯ < ∞. ¯ ¯ π 2k + 1 x∈R n≥0 k=0

Then by the Lebesgue dominated convergence theorem, for any g ∈ L(R) we have ¯Z ¯ Z ¯ ¯ ¯ |f (x) − gσ (x) − g(x)|dx ≥ ¯ (f (x) − gσ (x) − g(x)) sgn sin[σ(x − α)]dx¯¯ R R ¯ ¯ Z n ¯ X sin[(2k + 1)σ(x − α)] ¯¯ 4 ¯ (f (x) − gσ (x) − g(x)) dx¯ = ¯ lim ¯ ¯n→∞ π R 2k + 1 k=0 ¯ ¯ Z n ¯ X sin[(2k + 1)σ(x − α)] ¯¯ 4 ¯ = ¯ lim (f (x) − gσ (x)) dx¯ ¯n→∞ π R ¯ 2k + 1 k=0 ¯Z ¯ Z ¯ ¯ = ¯¯ (f (x) − gσ (x)) sgn sin[σ(x − α)]dx¯¯ = |f (x) − gσ (x)|dx. R

This proves Theorem 2.1.

R

¤

211

4

MICHAEL I. GANZBURG

Remark 2.1. Note that Theorem 2.1 strengthens Theorem 1.1 even in the case f ∈ L(R) since we replace a condition |f (x)| ≤ C(1 + x2 )−1 by a weaker condition f ∈ L(R). In addition, note that the proof of Theorem 2.1 is similar to that of Theorem 1.1 (cf. [1, Sec. 87]). We remark also that a different version of Theorem 2.1 was proved earlier by the author [5]. Next we prove a Nagy-type theorem for locally integrable functions. Theorem 2.2. (a) Let f be an even locally integrable function on R, which is a tempered distribution, and let for some σ0 ≥ 0 the restriction to (−∞, −σ0 )∪(σ0 , ∞) of the cos-transform Fc (f ) of the tempered distribution f be a thrice differentiable function. If Fc (f ) satisfies the following conditions for t > σ0 : Fc (f )(t) > 0,

d2 Fc (f )(t) ≥ 0, dt2

dFc (f )(t) ≤ 0, dt

d3 Fc (f )(t) ≥ 0, dt3

(2.1)

and lim Fc (f )(t) = 0,

(2.2)

t→∞

then f satisfies Mσ∗ -condition with α = π/(2σ) and σ > σ0 . In addition, Aσ (f )L(R) =

∞ Fc (f )((2k + 1)σ) 4X (−1)k . π 2k + 1 k=0

(b) Let f be an odd locally integrable function on R, which is a tempered distribution, and let for some σ0 ≥ 0 the restriction to (−∞, −σ0 ) ∪ (σ0 , ∞) of the sin-transform Fs (f ) of the tempered distribution f be a twice differentiable function. If Fs (f ) satisfies the following conditions for t > σ0 : Fs (f )(t) > 0,

dFs (f )(t) ≤ 0, dt

d2 Fs (f )(t) ≥ 0, dt2

(2.3)

and lim Fs (f )(t) = 0,

(2.4)

t→∞

then f satisfies Mσ∗ -condition with α = 0 and σ > σ0 . In addition, Aσ (f )L(R) =

∞ 4 X Fs (f )((2k + 1)σ) . π 2k + 1 k=0

Proof. (a) Let σ > σ0 . We first find a function hσ ∈ Bσ such that f − hσ ∈ L(R). Setting τ := (σ0 + σ)/2, we extend Fc (f )(t) from (−∞, −τ ] ∪ [τ, ∞) to R by the formula ½ Fc (f )(t), |t| ≥ τ F (t) := (2.5) P4 (t), |t| < τ, where P4 (t) :=

τ F (2) − F (1) 4 −τ F (2) + 3F (1) 2 τ 2 F (2) − 5τ F (1) + 8F (0) t + t + 8τ 3 4τ 8

is the Hermite polynomial satisfying the relations ¯ d(s) [Fc (f )(t)] ¯¯ (s) P4 (±τ ) = F (s) (±τ ) := , ¯ dts t=±τ

s = 0, 1, 2.

212

L-APPROXIMATION TO NON-PERIODIC FUNCTIONS

5

Then F is an even and bounded function on R. Moreover, it is a twice differentiable function on R and a thrice differentiable function on [τ, ∞), which satisfies the conditions F (t) > 0,

F 0 (t) ≤ 0,

F 00 (t) ≥ 0,

F 000 (t) ≤ 0,

t > τ,

(2.6)

and lim F (t) = 0.

(2.7)

t→∞

Next, it follows from (2.5) and (2.6) that F 0 ∈ L(R) and F 00 ∈ L(R). Hence integrating by parts and taking account of (2.7), we have for x 6= 0 Z A Fc (F )(x) = lim F (t) cos xt dt A→∞ −A à ! ¯A Z F (t) sin xt ¯¯ 1 A 0 = lim F (t) sin xt dt ¯ −x A→∞ x −A −A Z Z 1 1 = − F 0 (t) sin xt dt = 2 F 00 (t) cos xt dt. x R x R Hence Fc (F )(x) exists for every x 6= 0 and |Fc (F )(x)| ≤ Cx−2 ,

x 6= 0.

(2.8)

Further setting ϕ(x) := (2π)−1 Fc (F )(x), we shall show that hσ := f − ϕ belongs to Bσ . Indeed, ϕ is a tempered distribution since it is the cos-transform of a bounded continuous function (2π)−1 F (t) on R. Hence Fc (ϕ) = F.

(2.9)

Then hσ is an even tempered distribution, and its cos-transform Fc (hσ ) is defined as the functional (hσ , Fc (ψ)), where ψ is an even rapidly decreasing function from the Schwartz class S. Therefore, if ψ ∈ S and ψ = 0 on [−τ1 , τ1 ], where τ1 ∈ (τ, σ), then by (2.5) and (2.9), Z (hσ , Fc (ψ)) = (f, Fc (ψ)) − (ϕ, Fc (ψ)) = (Fc (f )(t) − F (t))ψ(t) dt = 0. |t|≥τ1

Thus the support of Fc (hσ ) is a subset of [−σ, σ]. Using now the generalized PaleyWiener theorem [19, Th. 7.2.3], we arrive at hσ ∈ Bσ . Next, f (x) − hσ (x) = (2π)−1 Fc (F )(x), so (2.8) and (2.10) imply Z |f (x) − hσ (x)| dx

Z ≤

R

Z |f (x)| dx +

|x|≤1

Z +

(2.10)

|hσ )(x)| dx |x|≤1

|f (x) − hσ (x)| dx < ∞. |x|>1

Thus ϕ = f − hσ ∈ L(R). Moreover, since ϕ ∈ L(R), identity (2.9) holds not only in the distributional sense but also as a formula for the cos-transform of an integrable function. Hence taking account of (2.5), (2.6), and (2.9), we conclude that ϕ satisfies all the conditions of Theorem 1.2(a). Therefore, ϕ satisfies Mσ condition for α = π/(2σ), that is, there exists Gσ ∈ Bσ ∩ L(R) such that the function cos σx[f (x) − hσ (x) − Gσ (x)] does not change its sign on R. Therefore,

213

6

MICHAEL I. GANZBURG

we conclude that f satisfies Mσ∗ -condition for α = π/(2σ) and gσ := hσ + Gσ . Moreover, by (1.1), (2.5), and (2.9), Aσ (f )L(R) = Aσ (f − hσ )L(R)

= =

∞ 4X Fc (f − hσ )((2k + 1)σ) (−1)k π 2k + 1

4 π

k=0 ∞ X

(−1)k

k=0

Fc (f )((2k + 1)σ) . 2k + 1

This proves statement (a) of Theorem 2.2. (b) The proof of this statement is similar to that of Theorem 2.2(a) if we replace Fc (f ) by Fs (f ) and P4 (t) by the polynomial τ 2 F (2) − 3τ F (1) + 3F (0) 5 −τ 2 F (2) + 5τ F (1) − 5F (0) 3 t + t 8τ 5 4τ 3 τ 2 F (2) − 7τ F (1) + 15F (0) + t. 8τ Therefore, Theorem 2.2 is established. P5 (t)

:=

¤

Remark 2.2. Note that Theorem 2.2 strengthens Theorem 1.2 even in the case f ∈ L(R) since we drop the conditions that Fc (f ) and Fs (f ) should be twice differentiable on R. We remark also that a special case of Theorem 2.2 for σ0 = 0 was proved earlier by the author [5]. 3. Examples Example 3.1. fn,sgn (x) := sgn(x) xn , n = 0, 1, 2 . . . Then the Fourier transforms of fn,sgn are given by the formulas [3, Sec. 10.1, #6] Fc (fn,sgn )(t) = 2n!(−1)(n+1)/2 t−(n+1) , Fs (fn,sgn )(t) = 2n!(−1)

n/2 −(n+1)

t

,

t > 0,

t > 0,

n = 1, 3, 5, . . . ,

n = 0, 2, 4, . . .

(3.1)

Since for odd n ≥ 1, (−1)(n+1)/2 fn,sgn satisfies all the conditions of Theorem 2.2(a) and for even n ≥ 0, (−1)n/2 fn,sgn satisfies all the conditions of Theorem 2.2(b), we arrive at (1.4). Example 3.2. fλ (x) := |x|λ , λ ∈ (−1, ∞), λ 6= 0, 2, 4, . . . Then by [3, Sec. 10.1, #11], Fc (fλ )(t) = −2 sin(λπ/2)Γ(λ + 1) t−(λ+1) ,

t > 0.

Since − sin(λπ/2)fλ satisfies the conditions of Theorem 2.2(a), we obtain ∞

Aσ (fλ )L(R) =

8| sin(λπ/2)|Γ(λ + 1) X (−1)k . λ+1 πσ (2k + 1)λ+2

(3.2)

k=0

Remark 3.1. The first direct proof of this result was given in [5]. Equality (3.2) can be also obtained as a corollary of a limit theorem for polynomial approximations in the integral metric [18] and L-approximation asymptotic results by Nikolskii [17] and Bernstein [2] (see [6, 7, 15, 10] for more details). A different proof of (3.2) for |λ| ≤ 1 was given in [4].

214

L-APPROXIMATION TO NON-PERIODIC FUNCTIONS

7

Example 3.3. fλ,sgn (x) := sgn(x) |x|λ , λ ∈ (−1, ∞), λ 6= 1, 3, 5, . . . Then by [3, Sec. 10.1, #12], Fs (fλ,sgn )(t) = 2 cos(λπ/2)Γ(λ + 1) t−(λ+1) ,

t > 0.

Since cos(λπ/2)fλ satisfies the conditions of Theorem 2.2(b), we obtain ∞

Aσ (fλ,sgn )L(R) =

8| cos(λπ/2)|Γ(λ + 1) X 1 . λ+1 πσ (2k + 1)λ+2

(3.3)

k=0

Example 3.4. fn,log (x) := xn log |x|, n = 0, 1, 2 . . . Then by [3, Sec. 10.1, #s 22,23], Fc (fn,log )(t) =

π(−1)n/2 n! t−(n+1) ,

Fs (fn,log )(t) =

(n+1)/2

π(−1)

n! t

t > σ0 ≥ 0,

−(n+1)

,

n = 0, 2, 4, . . .

t > σ0 ≥ 0,

n = 1, 3, 5, . . .

Using Theorem 2.2, we get Aσ (fn,log )L(R) Aσ (fn,log )L(R)

= =

∞ 4n! X (−1)k n+1 σ (2k + 1)n+2

n = 0, 2, 4, . . . .

4n! σ n+1

n = 1, 3, 5, . . .

k=0 ∞ X k=0

1 (2k + 1)n+2

(3.4)

A different proof of (3.4) for n=0 was given in [4]. A two-dimensional version of (3.4) was established in [8]. ∗ Example 3.5. fβ,arc := (2/π) arctan(x/β), β > 0.

We first note that ∗ )(t) := 2t−1 e−βt , Fs (fβ,arc

t > 0.

Indeed, using (3.1) and [11, Sec. 4.57], we have ∗ Fs (f1,arc )(t) = Fs (sgn x)(t) − (2/π)Fs (arctan x)(t) = 2/t − (2/t)(1 − e−t ) = 2t−1 e−t . ∗ Since fβ,arc satisfies the conditions of Theorem 2.2(b), we obtain ∗ Aσ (fβ,arc )L(R) =

∞ 8 X 1 . 2 eβσ(2k+1) πσ (2k + 1) k=0

∗ Note that for all x ∈ R, limβ→0+ fβ,arc (x) = sgn x and ∗ lim Aσ (fβ,arc )L(R) = Aσ (sgn x)L(R) = π/σ.

β→0+

Remark 3.2. Similarly to Examples 3.1-3.5, we can find Aσ (f )L(R) for functions such as sgn (x) xn log |x|, |x|λ log |x|, sgn(x) |x|λ log |x|, λ 6= 0, 1, 2, . . ., as well as more general functions of the form |x|λ logk |x| and sgn |x|λ logk |x|. Note that some of these results were recently discussed in [9, pp. 94, 95].

215

8

MICHAEL I. GANZBURG

References [1] N.I. Akhiezer, Lectures on the Theory of Approximation, (2nd ed.), Nauka, Moscow, 1965. [Russian] [2] S.N. Bernstein, On the best approximation of |x − c|p , in Collected Works, Vol II, Akad. Nauk SSSR, Moscow, 1954, pp.273–280. [Russian] [3] Yu.A. Brychkov and A.P. Prudnikov, Integral Transforms of Generalized Functions, Gordon and Breach Science Publishers, New York, 1989. [4] E. Carneiro and J.D. Vaaler, Some extremal functions in Fourier analysis, III, arXiv:0809.4053v1 [math.CA] 23 Sep 2008. [5] M.I. Ganzburg, Criteria for best approximation of locally integrable functions in L(R), in Studies in Current Problems of Summation and Approximation of Functions and their Applications, Dnepropetrovsk Gos. University, Dnepropetrovsk, 1983, pp. 11–16. [Russian] [6] M.I. Ganzburg, Limit theorems and best constants of approximation theory, in Handbook on Analytic-Computational Methods in Applied Mathematics (G.A. Anastassiou, ed.), CRC Press, Boca Raton, FL, 2000, pp.507–569. [7] M.I. Ganzburg, The Bernstein constant and polynomial interpolation at the Chebyshev nodes, J. Approx. Theory, 119,193–213(2002). [8] M.I. Ganzburg, Best constants of harmonic approximation on classes associated with the Laplace operator, J. Approx. Theory, 150,199–213(2008). [9] M.I. Ganzburg, Limit Theorems of Polynomial Approximation with Exponential Weights, Memoirs of AMS, 897, American Mathematical Society, Providence, RI, 2008. [10] M.I. Ganzburg and D.S. Lubinsky, Best approximating entire functions to |x|α in L2 , Contemporary Math., 455,93–107(2008). [11] I.S. Gradshteyn and I.M. Ryzhik, Tables of Integrals, Series and Products, Academic Press, San Diego, 1980. [12] N.P. Korneichuk, Exact Constants in Approximation Theory, Cambridge University Press, Cambridge, 1991. [13] M.G. Krein, On the best approximation of continuous differentiable functions on the whole real axis, Dokl. Akad. Nauk SSSR, 18,615–624(1938). [Russian] [14] F. Littman, Entire approximations to the truncated powers, Constr. Approx, 22,273– 295(2005). [15] D.S. Lubinsky, Series representations for best approximating entire functions of exponential type, in Proceedings of the International Conference on the Interactions between Wavelets and Splines, Athens, GA, Nashboro Press, Brentwood, TN, 2006, pp. 356-364. ¨ [16] B. Sz.-Nagy, Uber gewisse Extremalfragen bei transformierten trigonometrischen Entwicklungen. II. Nichtperiodischer Fall, Berichte Acad. d. Wiss., Leipzig, 91. [17] S.M. Nikolskii, On the best mean approximation by polynomials of the functions |x − c|s , Izvestia Akad. Nauk SSSR, 11,139–180(1947). [Russian] [18] R.A. Ratsin, S. N. Bernstein limit theorem for the best approximation in the mean and some of its applications, Izv. Vysch. Uchebn. Zaved. Mat, 12,81-86(1968). [19] R.S. Strichartz, A Guide to Distribution Theory and Fourier Transforms, CRC Press, Boca Raton, FL, 1994. [20] A.F. Timan, Theory of Approximation of Functions of a Real Variable, MacMillan, New York, 1963. [21] J. Vaaler, Some extremal functions in Fourier analysis, Bull. Amer. Math. Soc., 12,183– 216(1985). [22] A. Zygmund, Trigonometric Series (2nd ed.), Vol. I, Cambridge University Press, Cambridge, 1959.

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.2, 216-235, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Inequalities for Self-Reciprocal Polynomials and Uniformly Almost Periodic Functions N. K. Govil Department of Mathematics and Statistics Auburn University Auburn, AL 36849 U. S. A. E-mail: [email protected] Q. M. Tariq Department of Mathematics & Computer Science Virginia State University Petersburg, VA 23806 U. S. A. E-mail: [email protected] Abstract If p(z) is a polynomial of degree n and p0 (z) its derivative, then it is well known that max |p0 (z)| ≤ n max |p(z)|.

|z|=1

|z|=1

Also, for an entire function f (z) of exponential type τ , it was proved by S. N. Bernstein that sup

−∞ n/ 2, implying that if p(z) only √ belongs to Πn , the bound in (2.10) should be something greater than n/ 2. Aziz [1] considered another subclass of Πn and proved Pn ν THEOREM 2.9 Let p(z) = ν=0 (αν + iβν )z , αν ≥ 0, βν ≥ 0, ν = 0, 1, 2, . . . n be a polynomial belonging to Πn . Then n max |p0 (z)| ≤ √ max |p(z)|. |z|=1 2 |z|=1

(2.11)

The equality in (2.11) again holds for the polynomial p(z) = z n + 2iz n/2 + 1, n being even. As is easy to observe, the hypothesis of Theorem 2.9 is equivalent to P that p(z) belongs to Πn and that all the coefficients of p(z) = nν=0 aν z ν lie in the first quadrant of the complex plane. In fact, if all the coefficients of a polynomial p(z) belonging to Πn lie in a sector of opening π/2, say in, ψ ≤ arg z ≤ ψ + π/2, for some real ψ, then the polynomial P (z) = e−iψ p(z) belongs to Πn and has all its coefficients lying in the first quadrant of the complex plane. Since max|z|=1 |P (z)| = max|z|=1 |p(z)| and max|z|=1 |P 0 (z)| = max|z|=1 |p0 (z)| we may apply Theorem 2.9 to P (z) to get that if p(z) ∈ Πn and has all its coefficients lying in a sector of opening at most π/2, then also (2.11) holds. The following result that is equivalent to this statement appears in Jain [34].
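The sharpness claim is easy to check numerically. The short sketch below is our own illustration, not part of the paper: it samples |p(z)| and |p'(z)| on the unit circle for the extremal polynomial p(z) = z^n + 2i z^{n/2} + 1 (n even) and compares max|p'| with the bound (n/√2) max|p| of (2.11); the degree and grid size are arbitrary choices.

```python
import numpy as np

n = 10                                     # any even degree
theta = np.linspace(0, 2 * np.pi, 20001)
z = np.exp(1j * theta)                     # points on |z| = 1

p = z**n + 2j * z**(n // 2) + 1            # extremal self-reciprocal polynomial
dp = n * z**(n - 1) + 1j * n * z**(n // 2 - 1)

lhs = np.abs(dp).max()                     # max_{|z|=1} |p'(z)|
rhs = n / np.sqrt(2) * np.abs(p).max()     # right-hand side of (2.11)
print(lhs, rhs)                            # the two values agree up to grid resolution
```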

223

8

Self-Reciprocal Polynomials

P THEOREM 2.10 Let p(z) = nν=0 aν z ν where aν = αν eiφ +βν eiψ , αν ≥ 0, βν ≥ 0, 1, 2, ..., n, 0 ≤ |φ − ψ| ≤ π/2, be a polynomial of degree n. If further p(z) ∈ Πn , then n max |p0 (z)| ≤ √ max |p(z)|. |z|=1 2 |z|=1 The result is best possible with equality holding for the polynomial p(z) = z n + 2iz n/2 + 1, n being an even integer. Later, Datt and Govil [12] proved the following result which can yield both the above results Theorem 2.9 and Theorem 2.10. P THEOREM 2.11 Let p(z) = nν=0 (αν + iβν )z ν be a polynomial of dePn ν gree P n belonging to Πn . If on |z| = 1, the maximum of | ν=0 αν z | and | nν=0 βν z ν | is attained at the same point, then n max |p0 (z)| ≤ √ max |p(z)|. |z|=1 2 |z|=1 The equality here holds again for p(z) = z n + 2iz n/2 + 1, n being an even integer. Govil and Vetterlein [32] obtained a bound for max|z|=1 |p0 (z)| which depends on the opening of the sector containing all the coefficients of a selfreciprocal polynomial and includes as special cases Theorem 2.9 and 2.10. Further, their result is applicable even when the opening of the sector is greater than π/2. More precisely, their result is P THEOREM 2.12 Let p(z) = nν=0 aν z ν is a polynomial belonging to Πn , with its coefficients lying in a sector of opening γ with vertex at the origin where 0 ≤ γ ≤ 2π/3, then n max |p(z)|. max |p0 (z)| ≤ 2 cos(γ/2) |z|=1 |z|=1 The result is best possible with equality holding for the polynomial p(z) = z n + 2iz n/2 + 1, n being an even integer. Rahman and Tariq [44] observed that in the above theorem a sharp estimate of max|z|=1 |p0 (z)|, that is valid for γ in [0, π) instead of [0, 2π/3], can be given in terms of |p(1)| and that the proof given by Govil and Vetterlein [32] can be easily modified to prove the following P THEOREM 2.13 Let p(z) = nν=0 aν z ν be a polynomial belonging to Πn , with its coefficients lying in a sector of opening γ with vertex at the origin where 0 ≤ γ < π, then n max |p0 (z)| ≤ |p(1)|. (2.12) 2 cos(γ/2) |z|=1

224

N.K. Govil, et al.,

9

In the case where n is even, the polynomial p(z) := z n + 2 eiγ z n/2 + 1 shows that the above inequality is sharp for any γ ∈ [0, π). Although the class Πn of polynomials has been extensively studied among others by Frappier and Rahman [20] and Frappier, Rahman and Ruscheweyh [21], the problem of obtaining a sharp inequality analogous to Bernstein’s inequality (2.1) is still open for polynomials of degree n ≥ 3. However, the following sharp inequality in the reverse direction, which is easy to obtain, is due to Dewan and Govil [14]. THEOREM 2.14 If p(z) is a polynomial belonging to Πn , then max |p0 (z)| ≥ |z|=1

n max |p(z)|. 2 |z|=1

(2.13)

The result is best possible and the equality holds for p(z) = (z n + 1). It may be remarked that several of the above mentioned results for polynomials have been extended to entire functions of exponential type, and this, we take up in the next section.

3

Entire functions of exponential type and uniformly almost periodic functions

In this section we will discuss the extension of some of the results on polynomials mentioned in Section 2 to entire functions of exponential type, and of Theorem 2.13 to entire functions of exponential type that are uniformly almost periodic on the real line. We start with the following definitions. Let f be an entire function and r be any positive real number. We will denote M (r) = Mf (r) := max |f (z)|. |z|=r

The order (or the order of growth) of an entire function f , denoted by ρ, is defined by log log M (r) . ρ := lim sup log r r→∞ It is a convention to take the order of a constant function of modulus less than or equal to one as 0. An entire function of finite order ρ is said to have type T , where T is given by T := lim sup r→∞

log M (r) . rρ

225

10

Self-Reciprocal Polynomials

DEFINITION 1 An entire function f is said to be of exponential type τ if for every ε > 0 there is a constant k(ε) such that |f (z)| ≤ k(ε) e(τ +ε)|z| for all z ∈ C. It is clear that entire functions of order less than 1 are of exponential type τ , where τ can be taken to be any number greater than or equal to 0. Also entire functions of order 1 and type T ≤ τ are of exponential type τ . Examples of entire functions of exponential type includes polynomials with complex coefficients, sin τ z, cos τ z etc. DEFINITION 2 Let f be an entire function of exponential type. The function log |f (reiθ )| hf (θ) := lim sup , 0 ≤ θ < 2π. (3.1) r r→∞ is called the indicator function of f . It describes the growth of the function along a ray {z| arg z = θ}. It is finite or −∞. Unless hf (θ) ≡ −∞, it is a continuous function of θ. Bernstein himself (see [3], p. 102) found the extension of inequality (2.1) for the entire functions of exponential type that are bounded on the real line. He in fact proved THEOREM 3.1 Let f be an entire function of exponential type τ , bounded on the real axis. Then sup

−∞ 1, then Ψ−rectifiable curve can be non-rectifiable. Let z =Pξ(x) be a mapping of segment I = [0, 1] onto the curve Γ. Then n σΨ (Pτ ) := j=1 Ψ(|ξ(xj+1 ) − ξ(xj )|), where {xj } is increasing sequence of points on PnI. A function ξ(x) is called function with bounded ψ− variation if the sums j=1 Ψ(|ξ(xj+1 ) − ξ(xj )|) are bounded (see [15, 11]). Hence, Γ is Ψ−rectifiable if and only if the mapping ξ has bounded Ψ−variation. This fact enables us to apply L.C. Young’s theory of Stieltjes integral (see [15, 11]). As shown in [8], the Cauchy integral in the Stielties form Z 1 f (ζ)d log (ζ − z) 2πi Γ exists by virtue of the Young theorem [15] if f has bounded Θ−variation and P ∞ 1 1 n=1 ψ( n )θ( n ) < ∞; here Θ is a function satisfying all our assumption on the function Ψ, and θ is its inverse function. In particular, if Γ is Ψ−rectifiable and

290

POLYGONAL APPROXIMATIONS OF NON-RECTIFIABLE CURVES AND THE JUMP PROBLEM 7

f ∈ Hν (Γ), then f has bounded Θ−variation for Θ(x) = Ψ(( hx )1/ν ), h = hν (f, Γ). In this case θ(x) = hψ ν (x), and the Cauchy integral exists in the Stieltjes sense if µ ¶ ∞ X 1 1+ν ψ < ∞. (9) n n=1 Moreover, there is valid the following result. Lemma 2.7[8] Let Γ be Ψ−rectifiable curve, where the function Ψ is convex and satisfies the condition (9). If a function f ∈ Hν (Γ) has an extension F ∈ Hν (C) and F has first first partial derivatives which are locally integrable on the complex plane, then there is valid the Cauchy–Green formula Z ZZ 1 1 ∂F (ζ) dζdζ . f (ζ)d log (ζ − z) = χ(D+ , z)F (z) + 2πi Γ 2πi + ∂ζ ζ − z D Our double Whitney extension f E satisfies assumptions of this lemma. Thus, we obtain from the last lemma and equality (7) Theorem 3. Let Γ be a Ψ−rectifiable curve, where the function Ψ is convex and satisfies the condition (9). If it has a monotone polygonal approximation satisfying condition (6), then the jump problem (1) has a solution (5) for any f ∈ Hν (Γ), and this solution is representable as the Cauchy integral Z 1 Φ∗ (z) = f (ζ)d log (ζ − z), 2πi Γ where integration is understood in the Stieltjes sense. 4. Examples and Commentary 4.1. We consider first so called von Koch snowflake. This well known self-similar SS ∞ S3·4n−1 fractal curve bounds domain D+ = T0 ( n=1 j=1 Tnj ), where T0 is regular triangle with unit sides and Tnj are similar regular triangles with sides 1/3n . We S SN S3·4n−1 + + put DN := T0 ( n=1 j=1 Tnj ), ΓN := ∂DN . Obviously, pre-fractals ΓN forms + + N increasing approximation of Γ. The difference ∆N = DN +1 \ DN consists of 3 · 4 −N −1 N triangles with side 3 √ . Hence, its perimeter is λN = (4/3) , and its width is ωN = c · 3−N , c = 3/9. The series (6) converges for ν > log3 2. The fractal dimension of the von Koch snowflake equals to log4 2 = 2 log3 2. Thus, for this curve the condition (6) for solvability of the jump problem coincides with (4). 4.2. The following example shows that the condition (6) can improve (4). Let β > 1. We put Kn = 2[nβ] , where square brackets mean entire part, and divide the segment [2−n , 2−n+1 ] of real axis into Kn equal parts of length αn = 2−n /Kn each one. We denote by xn,j the ends of these parts, i. e. xn,j = 2−n + jαn , j = 0, 1, . . . , 2[nβ] − 1, and consider vertical segments In,j = [xn,j , xn,j + i2−n ]. Then we fix a decreasing sequence of positive numbers εn such that εn < 21 αn , n = 1, 2, . . . , and consider mutually disjoint rectangles δn,j = {z = x + iy : xn,j < x < xn,j + εn , 0 < y < 2−n }. Let δ0 be square {z = x + iy : 0 < x < 1, 0 < y < 1} 2[nβ] −1 and D+ ≡ δ0 \ (∪∞ δn,j ), i. e. D+ is unit square with countable set of n=1 ∪j=0 rectangular cuts condensing to origin. We denote by Γ∗ the boundary of domain

291

8

B.A. KATS

D+ . It is non-rectifiable. As shown in [5, 6], Dm Γ∗ =

2β β+1

(10)

in the case εn = 12 αn . But the considerations of these papers keep correctness for εn < 12 αn , too. Thus, the equality (10) is valid under assumptions of the present example. + + 2[nβ] −1 The polygons Γ∗N = ∂DN , DN = δ0 \ (∪N δn,j ), N = 1, 2, . . . , form n=1 ∪j=0 [nβ]

decreasing approximation of Γ∗ . The difference ∆N = ∪2j=0 −1 δn,j here is union of finite family of rectangles, and values of its perimeter and width are evident. For instance, if εn = O(exp (−αn−1 )), then the series (6) converges for ν > 21 , what is ∗ essentially better than ν > Dm2 Γ . 4.3. A polygonal approximation can be constructed in terms of the Schauder series. Let z = ξ(x) be a continuous mapping of segment P∞ I = [0, 1] onto the curve Γ.We extend ξ(x) into the Schauder series ξ(x) = j=0 aj Ωj (x), where Ωj (x) are the Schauder functions. These functions are piecewise-linear (see, for instance, [1]). Pn Hence, a partial sum ξn (x) = j=0 aj Ωj (x) maps I onto a polygon Γn . But the Shauder series uniformly converges to ξ(x) (see [1]). Thus, the sequence Γ1 , Γ2 , . . . approximates the curve Γ (generally speaking, non-monotone one). Certain solvability conditions for the jump problem in terms of the Schauder coefficients are obtained in [9]. The research is supported by Russian Foundation for Basic Researches, grant 07-01-00166-a. References [1] Z. Ciesielski, Fractal functions and Schauder bases, Comp. Math. Appl., 30, 283–291(1995). [2] E.M. Dynkin, Smoothness of the Cauchy type integral, Zapiski nauchn. sem. Leningr. dep. mathem. inst. AN USSR, 92, 115–133 (1979). [3] I. Feder, Fractals, Mir Publishers, Moscow, 1991. [4] F.D. Gakhov, Boundary value problems, Nauka publishers, Moscow, 1977. [5] B.A. Kats, The Riemann boundary value problem on closed Jordan curve, Izv. VUZov, Mathematics, 4, 68–80(1983). [6] B.A. Kats, The Riemann boundary value problem on non-smooth arcs and fractal dimensions, Algebra and Analysis, St-Petersbourg, 6, 147-171(1994). [7] B. A. Kats, The refined metric dimension with applications, Computational Methods and Function Theory , 7, 77–89(2007). [8] B. A. Kats, The Cauchy integral over Non-rectifiable Paths, Contemporary Mathematics, 455, 183–196(2008). [9] B. A. Kats and A. Yu. Pogodina, The jump problem and the Faber–Schauder series, Izv. vuzov, Mathematics, 1, 16-22(2007). [10] A.N. Kolmogorov, V.M. Tikhomirov, ε−entropy and capasity of set in functional spaces, Uspekhi Math. Nauk, 14, 3–86(1959). [11] R. Lesniewicz, W. Orlicz, On generalized variation II, Studia Mathematica, XLV, 71– 109(1973). [12] N.I. Muskhelishvili, Singular integral equations, Nauka publishers, Moscow, 1962. [13] T. Salimov, A direct bound for the singular Cauchy integral along a closed curve, Nauchn. Trudy Min. vyssh. i sr. spec. obraz. Azerb. SSR, Baku, 5, 59–75(1979). [14] E.M. Stein, Singular integrals and differential properties of functions, Princeton University Press, Princeton, 1970. [15] L.C. Young, General inequalities for Stieltjes integrals and the convergence of Fourier series, Math. Annalen, 115, 581–612(1938).

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.2, 292-295, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

THE BOUNDARY VALUE PROBLEM FOR THE FOURTH-ORDER EQUATION WITH FRACTIONAL DERIVATIVES
D. Amanov, N. M. Kuzibaev
November 26, 2008
Institute of Mathematics and IT Technologies, Tashkent, Uzbekistan. E-mail: amanovd@rambler.ru
Keywords: fractional derivatives (Caputo derivative, Riemann-Liouville derivative), 2-parabolic equation, equation of transverse vibration of elastic rods, Bessel inequality, complete orthonormal system in L2(0, p), Mittag-Leffler function, Volterra integral equation of the second kind, Lipschitz conditions, piecewise continuity.

1

The statement of the problem.

In the domain

= f(x; t) : 0 < x < p; 0 < t < T g let us consider the equation uxxxx +C D0t u = f (x; t)

(1)

where 1 < < 2, C D0t is Caputo fractional the th derivative operator with respect to t [1],[4]. If = 2 the equation (1) changes to the well-known equation of transverse vibration of elastic rods uxxxx + utt = f (x; t); And if

= 1 the equation (1) changes into 2parabolic equation uxxxx + ut = f (x; t):

Caputo fractional derivative operator expresses as Riemann-Liouville fractional integral [1] @2u 2 (2) C D0t = I0t @t2 Problem. In the domain = f(x; t) : 0 < x < p; 0 < t < T g …nd u(x; t) that is the solution of the equation (1) and is satisfying the following conditions @ 2m u(0; t) @ 2m u(p; t) = = 0; m = 0; 1; 0 2m @x @x2m 1

t

T;

(3)

AMANOV-KUZIBAEV:BVP-FRACTIONAL DERIVATIVES

u(x; 0) = '(x); ut (x; 0) = (x); 0

x

293

p:

(4)

(x;t) 2 Lip [0; p] is Theorem. If f (x; t) 2 C 1 [0; p]; f (0; t) = f (p; t) = 0; @f@x 0 uniformly in t, 0 < < 1. ' (x) ; (x) 2 C[0; p]; ' (x) ; 0 (x) are piecewise continuous on [0; p]; ' (0) = ' (p) = 0; (0) = (p) = 0, then there is a solution 2;1 4;2 of the problem in class Cx;t \ Cx;t ( ). Proof. We research the regular solution of the problem in the form of

u(x; t) =

1 X

un (t)Xn (x);

(5)

n=1

here Xn (x) =

r

2 sin p

n x;

n

=

n ; n 2 N: p

Substitute (5) into the equation (1) we get C D0;t un (t)

where fn (t) =

4 n un

+

Zp

(t) = fn (t)

(6)

f (x; t)Xn (x) dx

0

Conditions (4) change to the following conditions un (0) = 'n ; u0n (0) = here 'n =

Zp

'(x)Xn (x) dx;

n

=

0

Zp

n

(x)Xn (x) dx:

0

Using (2) we reduce equation (6) to the form 2 I0;t un (t) +

4 n un

(t) = fn (t)

Acting to both sides of this equation by the operator I0t we get Volterra integral equation of the second kind un (t) +

4 n

( )

Zt

(t

)

1

u n ( ) d = 'n + t

n

+ I0t fn (t)

0

Using the successive approximation method we …nd the unique solution un (t) = 'n E

;1

4 nt

+t

n E ;2

4 nt

Zt + (t 0

2

)

1

E

;

4 n

(t

)

fn ( ) d

294

AMANOV-KUZIBAEV:BVP-FRACTIONAL DERIVATIVES

here E

(z) =

;

1 X

k=0

zk (k + )

is Mittag-Le- er function[3]. Then the solution of the problem rewrites in the form of u(x; t) = +

1 P

1 P

Xn (x) 'n E

n=1

n=1

Rt Xn (x) (t

1

)

4 nt

;1

+t

4 nt

n E ;2

+ (7)

E

4 n

;

(t

)

fn ( ) d

0

It is easy to check that

1 P

lim u(x; t) =

t!0

lim @u t!0 @t

1 P

=

n=1

n=1

'n Xn (x) = '(x);

n Xn (x)

=

(x) :

If we show the uniform convergence of the series 1 P

uxxxx = +

n=1

1 P

4 n Xn

n=1

4 n Xn

(x) 'n E

Rt (x) (t

1

)

4 nt

;1

+t

4 nt

n E ;2

+ (8)

E

4 n

;

(t

)

fn ( ) d

0

then the series (7) and C D0t u 1 P

n=1

=

1 P

fn (t)Xn (x)

n=1 4 n Xn

1 P

n=1

Rt (x) (t

)

1

E

4 n Xn

(x) 'n E

4 n

;

(t

)

4 nt

;1

+t

n E ;2

4 nt

fn ( ) d

0

(9)

convergence uniformly, too. The series (8) we represent as the sum I1 + I2 + I3 , where I1 = I2 = I3 =

1 P

n=1 1 P n=1 1 P n=1

4 n 'n Xn (x)E ;1 4 n

4 nt 4 nt

n Xn (x)tE ;2

4 n Xn (x)

Rt

(t

;

)

1

E

; ;

4 n

(t

)

fn ( ) d :

0

Now we need the following estimate jE

;

(z)j

M ; jzj > 0; M = const > 0 1 + jzj

3

(10)

AMANOV-KUZIBAEV:BVP-FRACTIONAL DERIVATIVES

295

Using (10) we show the convergence of the I1 1 X

4 n

n=1

Let 0 < t0 < t

j'n j E

4 nt

;1

M

1 X

4 n

j'n j : 1 + 4n t n=1

T . In terms of the theorem we have 1 X

1 X j'n j < 1 + 4n t n=1 n=1 4 n

1 1 X j' j < 1 j'n j = 4 t0 n=1 n n t0 4 n

Analogously for I2 we get 1 X

4 n

n=1

j

nj t E

1 M X

4 nt

;2

1

t0

n=1

j

nj

< 1:

And now we show the convergence of theI3 . It majorizes by the following series 1 P

n=1

4 n

CM

= CM

Rt

(t

1 P

n=1 1 P n=1

1

)

0

1 1+ n

1 1+ n

Rt 0

E

4 n

;

4 n (t 4 (t n

)

1+

ln(2

4 nt

)

)

(t

)

d = CM CM

1 P

n=1

jfn ( )j d 1 P

n=1 1

1+ n

( 1) 1+ n

Rt d(1+ 0

ln 2T +

1+ 4 "

4 n (t 4 (t n

ln

" n

) )

)

=

;" > 0

Let 0 < " < . Then the above series converges uniformly. Theorem is proved. References 1. Samko S.G., Kilbas A.A., Marichev O.U. Integrals and derivatives of fractional order and their applications. Minsk, Nauka and tehnika. 1987. 688p. 2. Mikhailov V.P. On potential of parabolic equations. Soviet math.dokl.> 129, N6,1959, p.1226-1229. 3. Dzhrbashyan M.M. Integral transformations and representations of functions in complex domain. M., Nauka, 1966. 672p. 4. Goren‡o R., Luchko Y.F., Umarov S.R. on the Cauchy and multy-point problems for partial pseudo-di¤erential equations of fractional order// Fract.Calc.Appl.2000,V.3.

4

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.2, 296-327, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

On Sparse Solutions of Underdetermined Linear Systems
Ming-Jun Lai, Department of Mathematics, the University of Georgia, Athens, GA 30602
January 15, 2009

Abstract. We first explain the research problem of finding the sparse solution of underdetermined linear systems, together with some of its applications. Then we explain three different approaches to computing the sparse solution: the ℓ1 approach, the orthogonal greedy approach, and the ℓq approach with 0 < q ≤ 1. We mainly survey recent results and present some new or simplified proofs. In particular, we give a good reason why the orthogonal greedy algorithm converges and why it can be used to find the sparse solution. Concerning the restricted isometry property (RIP) of matrices, we provide an elementary proof of a known result that the probability that a random matrix with iid Gaussian entries possesses the RIP is strictly positive.

1

The Research Problem

Given a matrix Φ of size m × n with m ≤ n, let $\mathcal{R}_k=\{\Phi x:\ x\in\mathbb{R}^n,\ \|x\|_0\le k\}$ be the range of Φ over all vectors with at most k nonzero components, where $\|x\|_0$ stands for the number of nonzero components of x.



Throughout this article, Φ is assumed to be of full rank. For a vector $y\in\mathcal{R}_k$, we solve the following minimization problem:
$$\min\{\|x\|_0:\ x\in\mathbb{R}^n,\ \Phi x=y\}.\qquad(1)$$

The solution of the above problem is called the sparse solution of y = Φx. It is clear that the above problem can be solved in finite time. Indeed, write Φ = [φ1, φ2, ..., φn] with each φi an m × 1 vector. One can choose m columns, say A = [φi1, ..., φim], from Φ to form an m × m linear system Az = y. If A is nonsingular, one can find a solution z. By exhausting all m × m nonsingular submatrices of Φ and solving all such linear systems, one can see which solution has the smallest number of nonzero entries. However, there could be as many as $\binom{n}{m}$ such nonsingular linear systems arising from Φx = y which need to be solved. For example, consider a rectangular matrix Φ with entries $(x_j)^i$, $i=0,\dots,m$, $j=1,\dots,n$, for distinct real numbers $x_j$; any m × m submatrix of Φ is of full rank. The number $\binom{n}{m}$ grows exponentially fast as m and n go to ∞; for example, when n = 2m, $\binom{n}{m}\approx 2^{n}$. A common case, m = 512 and n = 1024, would require solving at least $2^{512}$ linear systems of size 512 × 512, which is impossible to do within an hour on currently available computers. That is, the above method for solving Eq. (1) needs non-polynomial time (a toy implementation of this exhaustive search is sketched at the end of this section). Are there any other methods to solve the above problem? Before we answer this question, let us see in the next section why we want to solve the problem.

Remark 1.1 When m = n, Φx = y is a standard linear system and the solution is unique if Φ is of full rank. We already know that Gaussian elimination can be used to solve such linear systems.

Remark 1.2 When m > n, one may not be able to satisfy Φx = y. Instead, one asks for the x which minimizes the quantity $\|\Phi x-y\|_2$, where $\|\cdot\|_2$ is the discrete ℓ2 norm. This is a standard least squares problem. When Φ is not of full rank, one usually computes the following minimal norm solution using standard least squares methods. That is, find x such that
$$\min\{\|x\|_2:\ x\in S_A\subset\mathbb{R}^n\},\qquad(2)$$
where
$$S_A:=\{x\in\mathbb{R}^n:\ \|\Phi x-y\|_2=\min_{z}\|\Phi z-y\|_2\}.\qquad(3)$$



The solution can be found by using the pseudo-inverse or the singular value decomposition.

Remark 1.3 When each column of Φ is normalized to have norm 1, Φ is called a dictionary. When $\Phi\Phi^{T}=I_m$, with $I_m$ the identity matrix of size m × m, Φ is called a tight frame. We shall use these two concepts in later sections.
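As a small, self-contained illustration of the two notions just discussed, the sketch below (ours, not from the paper) computes the minimal-norm least-squares solution of Remark 1.2 with the pseudo-inverse and, for a tiny instance, the sparse solution of (1) by the exhaustive search over supports described before Remark 1.1; the sizes are deliberately small because the enumeration cost grows combinatorially.

```python
import numpy as np
from itertools import combinations

def minimal_norm_solution(Phi, y):
    """Minimal ||x||_2 solution of Phi x = y (Remark 1.2), via the pseudo-inverse."""
    return np.linalg.pinv(Phi) @ y

def sparse_solve_bruteforce(Phi, y, tol=1e-10):
    """Solve (1) by trying all supports of size 1, 2, ...; exponential cost in n."""
    m, n = Phi.shape
    for s in range(1, m + 1):
        for support in combinations(range(n), s):
            sub = Phi[:, support]
            z, *_ = np.linalg.lstsq(sub, y, rcond=None)
            if np.linalg.norm(sub @ z - y) <= tol:
                x = np.zeros(n)
                x[list(support)] = z
                return x
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((6, 12))
    x_true = np.zeros(12)
    x_true[[2, 7]] = [1.0, -2.0]
    y = Phi @ x_true
    x2 = minimal_norm_solution(Phi, y)         # dense in general
    x0 = sparse_solve_bruteforce(Phi, y)       # recovers the 2-sparse vector
    print("nonzeros:", np.count_nonzero(np.round(x2, 8)), np.count_nonzero(x0))
```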

2

Why do we find the sparse solutions?

In this section we give several reasons why we want to solve the sparse solution of underdetermined systems of linear equations.

2.1

Motivation: Signal and Image Compression

This is the most direct and natural application. Suppose that a signal or an image y is represented by using a tight frame Φ of size m × n with m < n. We look for a sparse approximation x satisfying
$$\min\{\|x\|_0:\ x\in\mathbb{R}^n,\ \|\Phi x-y\|\le\theta\},\qquad(4)$$

where θ > 0 is a tolerance. In particular, for lossless compression, i.e., θ = 0, the above (4) is exactly our research problem (1).

2.2

Motivation: Compressed Sensing

We are interested in economically recording information about a vector x in Rn . First of all, we allocate m nonadaptive questions to ask about x. Each question takes the form of a linear functional applied to x. Thus, the information we obtain from the questions is given by y = Φx, where Φ is a matrix of m×n. In general, m is much smaller than n since x is a compressible data vector. Let ∆ be a decoder that provides an approximation x∗ to x using the information that y holds. That is, ∆y = x∗ ≈ x. Typically, the mapping ∆ is nonlinear. The central question of compressed sensing is 3



to find a good set of questions and a good decoder (Φ, ∆) so that we can find a good approximation x∗ of x. See, e.g. [Cand´es’06]. For example, when x ∈ Rn with kxk0 ≤ k k. That is, we have k < m < n. If C is chosen in the form ΦA−1 for some rectangular matrix Φ of size m×n, we need to solve the following minimization problem in order to record the data z economically. y = Cz = ΦA−1 Ax = Φx.

(6)

The problem is to find the sparse representation x satisfying the above (6) which is the same as (1).

2.3

Motivation: Error Correcting Codes

Let z be the vector obtained by encoding x with a redundant linear system A of size m × n with m > n; that is, z = Ax is transmitted through a noisy channel. The channel corrupts some random entries of z, resulting in a new vector w = z + v. Finding the vector v is equivalent to correcting the errors. To this end, we extend A to a square matrix B of size m × m by adding $A^{\perp}$, i.e., $B=[A,\,A^{\perp}]$. Assume that A satisfies $A^{T}A=I_n$, where $I_n$ is the identity matrix of size n. Then we can choose $A^{\perp}$ such that $BB^{T}=I_m$, the identity matrix of size m. Clearly,
$$B^{T}w=B^{T}z+B^{T}v=\begin{pmatrix}x\\0\end{pmatrix}+\begin{pmatrix}A^{T}v\\(A^{\perp})^{T}v\end{pmatrix}.$$
Let $y=(A^{\perp})^{T}v$, which consists of the last m − n entries of $B^{T}w$. Since z lies in the codeword space V, the linear span of the columns of A, $(A^{\perp})^{T}v$



is not in the codeword space and is the only information about v available to the receiver. If the receiver is able to solve the minimization problem Eq.(1) with Φ = (A⊥ )T . That is, find the sparsest solution v such that y = (A⊥ )T v. Then we can get the correct x. Thus, this error correcting problem is again equivalent to the sparsest solution problem Eq. (1). See [Candes and Tao’05] and [Candes, Romberg, Tao’06] for more detail.
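To make the syndrome construction above concrete, here is a minimal sketch of our own (not from the paper): it builds A with orthonormal columns from a QR factorization, so that B = [A, A⊥] is orthogonal, and checks that the receiver's data y = (A⊥)ᵀw depends only on the sparse error v. All names and sizes are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, errors = 12, 4, 2

# A with orthonormal columns (A^T A = I_n) and its orthogonal complement A_perp
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
A, A_perp = Q[:, :n], Q[:, n:]

x = rng.standard_normal(n)
z = A @ x                                    # transmitted codeword
v = np.zeros(m)
v[rng.choice(m, size=errors, replace=False)] = rng.standard_normal(errors)
w = z + v                                    # received, corrupted vector

y = A_perp.T @ w                             # the only data available to the receiver
print(np.allclose(y, A_perp.T @ v))          # True: y carries information about v only
```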

2.4

Motivation: Cryptography

Although large prime numbers are currently used for secure data transmission, it is possible to use underdetermined systems of linear equations instead. The idea can be described as follows. Suppose that we have a class of matrices Φ of size m × n with m < n which admit a computationally efficient algorithm for solving the minimization Eq. (1) for any given y in the range $\mathcal{R}_k$. Let Ψ be an invertible random matrix of size m × m and A = ΨΦ. Suppose that a receiver wants to get a secret data vector x from a customer, e.g., a vector consisting of a credit card number, expiration date, and the name on the credit card. The receiver sends the matrix A to the customer over a public channel. After receiving A, the customer computes z = Ax and sends z to the receiver over a public channel. As mentioned above, finding the sparse solution x from z using the matrix A requires non-polynomial time, so with overwhelming probability such x cannot be found by other parties. However, the receiver is able to get x by solving $y=\Psi^{-1}z=\Phi x$, which is our research problem Eq. (1). By changing Ψ frequently enough, the receiver can recover the secured data every time, while it remains infeasible for an eavesdropper to decode it.

2.5

Motivation: Recovery of Loss Data

Let z be an image and e z be a partial image of z. That is, z loses some data to become e z. Suppose that we know the location where the data are lost. We would like to recover the original image from the partial image e z. Let Φ be a tight wavelet frame such that x = Φz is the most sparse representation for z. Let Ψ be the residual matrix from Φ by dropping off the columns corresponding to the unavailable entries, i.e., the missing data locations. Note that ΦT Φ = Im and hence, ΨT Ψ = Iℓ with ℓ < m. 5

LAI: UNDERDETERMINED SYSTEMS

301

It follows that ΨT x = ΨT Φz = e z by the orthonormality of columns of Φ. Thus, we need to find the sparsest solution x from the given e z such that e z = ΨT x which is exactly the same problem as our research problem(1). Once we have x, we can find z which is z = Φx. See [Aharon, Elad, Bruckstein’06] for numerical experiments.

3

The ℓ1 Approach

Although the problem in Eq. (1) needs a non-polynomial time to solve (cf. [Natarajan95]) in general, it can be much more effectively solved by using many other methods, e.g., ℓ1 minimization approach, reweighted ℓ1 method, OGA(orthogonal greedy algorithm), and the ℓq approach. Let us review these approaches in the following subsections and following sections. The ℓ1 minimization problem is the following min{kxk1 ,

x ∈ Rn , Φx = y},

(7)

Pn T where kxk1 = i=1 |xi | for x = (x1 , x2 , · · · , xn ) . The solution ∆1 Φx is called the ℓ1 solution of y = Φx. Since the ℓ1 minimization problem is equivalent to the linear programming, this converts the problem into a tractable computational problem. (See [Lai and Wenston’04] for a justification of the equivalence and a computational algorithm for ℓ1 minimization.) A matlab ℓ1 minimization program is available on-line. But one has to study when the (P1) solution (the solution of Eq. (7)) is also the (P0) solution (the solution of Eq. (1)). There are two concepts: mutual coherence(MC) and restricted isometric property (RIP) of the matrix Φ to help describe the situation.

3.1

The Mutual Coherence

Let us begin with the spark of matrix A, the smallest possible number σ such that there exists σ columns from A that are linearly dependent. It is clear that σ(A) ≤ rank(A) + 1. The following theorem is belong to [Donoho and Elad’03]. Theorem 3.1 A representation y = Φx is necessarily the sparsest possible if kxk0 < spark(Φ)/2. 6

302

LAI: UNDERDETERMINED SYSTEMS

Proof. Suppose that there are two sparse solutions x1 and x2 with kx k0 ≤ k and kx2 k0 ≤ k solving y = Φx. Then Φ(x1 − x2 ) = 0. So k(x1 − x2 )k0 ≤ 2k but, k(x1 − x2 )k0 ≥ spark(Φ). It follows that k ≥ spark(Φ)/2. Hence, when k < spark(Φ)/2, the sparsest solution is unique. That is, if one find a solution x of Φx = y with kxk0 < spark(Φ)/2, then x is the sparse solution. Next we introduce the concept of mutual coherence of matrix Φ. Assume that each column of Φ is normalized. That is, Φ is a dictionary. Let G = ΦT Φ which is a square matrix of size n × n. Write G = (gij )1≤i,j≤n , the mutual coherence of Φ is M = M(Φ) = max |gij |. 1

1≤i,j≤n i6=j

Clearly, M ≤ 1. We would like to have matrix Φ such that its mutual coherence M is as small as possible. However, M(Φ) can not be too small. We have Lemma 3.2 If n ≥ 2m, then M(Φ) ≥ (2m)−1/2 . Proof. Indeed, let λi , i = 1, · · · , n be eigenvalues of G. Since G is positive semi-definite, P all λi ≥ 0. Since the rank of G is equal to m, only m nonzero λi . Since i λi is equal to the trace of G which is n since gii = 1. That is, s X X √ n= λi ≤ m λ2i . (8) i

i

On the other hand, using a property of the Frobenius norm of G, we have X X λ2i = kGk2F = (gij )2 . (9) i

1≤i,j≤n

It follows from Eq. (8) and (9) that (n2 − n)M(Φ)2 + n ≥ That is, M(Φ) ≥

q

n−m . m(n−1)

X

(gij )2 ≥

1≤i,j≤n

n2 m

In particular, when n ≥ 2m, we have M(Φ) ≥

(2m)−1/2 . That is, M(Φ) ∈ ((2m)−1/2 , 1]. With M(Φ), we can prove the following (cf. [Donoho and Elad03]) 7

LAI: UNDERDETERMINED SYSTEMS

303

Theorem 3.3 Let Spark(Φ) be the spark of Φ and M(Φ) be the coherence of Φ. Then Spark(Φ) > 1/M(Φ). Next we need the following lemma. Lemma 3.4 Let k < 1/M + 1. For any S ⊂ {1, · · · , n} with #(S) ≤ k and ΦS be the matrix consisting of the k columns of Φ with column indices in S. Then the kth singular value of ΦS is bounded below by (1 − M(k − 1))1/2 and above by (1 + M(k − 1))1/2 . Proof. For any vector v ∈ Rn with support on S, we have X vT Gv = vS ΦTS ΦS vS = kvk2 + vi gij vj . i6=j i,j∈S

Since k

X

i6=j i,j∈S

vi gij vj k ≤ M ≤ M

X

i6=j i,j∈S

|vi vj |

X

i,j∈S

|vi vj | − kvk22

!

≤ Mkvk22 (k − 1), we have vS ΦTS ΦS vS ≥ kvk22 − M(k − 1)kvk22 = (1 − M(k − 1))kvk22 . Similarly, vT Gv = vS ΦTS ΦS vS = kvk2 + ≤

kvk22

+ M(k −

1)kvk22

X

vi gij vj

i6=j i,j∈S

= (1 + M(k − 1))kvk22 .

These complete the proof. We first show that if k < (1 + 1/M)/2 and for any y ∈ Rk , the sparse solution of Eq. (1) is unique. Lemma 3.5 Suppose k < (1 + 1/M)/2. For any y ∈ Rk , the sparse solution of Eq. (1) is unique. 8

304

LAI: UNDERDETERMINED SYSTEMS

Proof. Let y = Φx0 ∈ Rk . Let x1 be a solution of Eq. (1) with kx1 k0 ≤ k. Then kx0 − x1 k0 ≤ 2k. By Lemma 3.4, we have (1 − M(2k − 1))kx0 − x1 k22 ≤ kΦ(x0 − x1 )k22 = 0. Since 1 − M(2k − 1) 6= 0, it follows that x0 = x1 . In order to fully understand the computation of Eq. (7), we next introduce a variance of Eq. (7): solve the following minimization problem min{kxk1 ,

x ∈ Rn , kΦx − yk2 ≤ δ},

(10)

where y = Φx0 + z with kzk2 ≤ ǫ < δ and x0 ∈ Rn is a vector with k nonzero entries, that is, Φx0 ∈ Rk . Hence, we consider the case that y has some measurement error and compute a solution x within accuracy δ. Theorem 3.6 Let M be the mutual coherence of Φ. Suppose that k < (1/M + 1)/4. bǫ,δ be the solution of Eq. (10). Then For any x0 with kx0 k0 ≤ k, let x kb xǫ,δ − x0 k22 ≤

(ǫ + δ)2 . 1 − M(4k − 1)

bǫ,δ − x0 . Clearly, kb Proof. Write w = x xǫ,δ k1 = kw + x0 k1 ≤ kx0 k1 when computing the ℓ1 minimization. Let S ⊂ {1, 2,P · · · , n} be the P index set where x P0 is supported. P Since kw + x0 k1 ≥ kx0 k1 − i∈S |wi| + i∈Sˆ |wi |, we have |w | ≤ ˆ i i∈S i∈S |wi | or kwk1 ≤ 2

X i∈S

√ |wi| ≤ 2 kkwk2,

(11)

where Sˆ denotes the complement set of S in {1, 2, · · · , n}. On the other hand, kΦb xǫ,δ − yk2 ≤ δ and y = Φx0 + z imply that kΦw + zk2 ≤ δ. That is, kΦwk2 ≤ kΦw + zk2 + ǫ ≤ δ + ǫ. Finally, kΦwk22 = kwk22 + w T (G − I)w ≥ kwk22 − M(kwk21 − kwk22) ≥ (1 + M)kwk22 − M4kkwk22 by the estimate (11) above. It follows that kwk22 ≤

1 (ǫ + δ)2 kΦwk22 ≤ 1 + M − 4Mk 1 − M(4k − 1) 9

LAI: UNDERDETERMINED SYSTEMS

305

by the estimate in the previous paragraph. This concludes the result in this theorem. Next we look at the (ǫ, δ) variance of the (P0) problem: to solve the following minimization problem min{kxk0 ,

x ∈ Rn , kΦx − yk2 ≤ δ},

(12)

where y = Φx0 + z with kzk2 ≤ ǫ and Φx0 ∈ Rk . Then we can prove Theorem 3.7 Let M be the mutual coherence of Φ. Suppose that k < (1/M + 1)/2. eǫ,δ be the solution of Eq. (12). Then For any x0 with kx0 k0 ≤ k, let x ke xǫ,δ − x0 k22 ≤

(ǫ + δ)2 . 1 − M(2k − 1)

Proof. By Lemma 3.4, we have 1 kΦ(e xǫ,δ − x0 )k22 1 − M(2k − 1) 1 (ǫ + δ)2 kΦe xǫ,δ − y + zk22 ≤ . = 1 − M(2k − 1) 1 − M(2k − 1)

ke xǫ,δ − x0 k22 ≤

This completes the proof. Both theorems above were proved in [Donoho, Elad and Temlyakov’06]. In particular, the proof of Theorem 3.7 above is a much simplified version of the one in [Donoho, Elad and Temlyakov’06]. It is easy to see that there is a gap between the requirements of k. That is, one is to require k < (1+1/M)/4 by any ℓ1 method and the other is to require k < (1 + 1/M)/2 by an ℓ0 method. Thus, the ℓ1 method is not optimal yet. It is interesting to know how we can increase k when using the ℓ1 method.

3.2

RIP

Another approach is to use the so-called Restricted Isometry Property(RIP) of the matrix Φ. Letting 0 < k < m be an integer and AT be a submatrix of A

10

306

LAI: UNDERDETERMINED SYSTEMS

which consists of columns of A whose column indices are in T ⊂ {1, 2, · · · , n}, the k restricted isometry constant δk of A is the smallest quantity such that (1 − δk )kxk22 ≤ kAT xk22 ≤ (1 + δk )kxk22

(13)

for all subset T with #(T ) ≤ k. If a matrix A has such a constant δk > 0 for some k, A possesses RIP. With this concept, it is easy to see that if δ2k < 1, then the solution of Eq. (1) is unique. Indeed, if there were two solutions x1 and x2 such that Φ(x1 − x2 ) = 0, then we choose the index set T which contains the indices of the nonzero entries of x1 − x2 and see that #(T ) ≤ 2k which implies (1 − δ2k )kx1 − x2 k22 ≤ kΦT (x2 − x2 )k22 = 0. It follows that kx1 − x2 k2 = 0 when δ2k < 1. That is, the solution is unique. Furthermore, Theorem 3.8 ([Candes, Romberg, and Tao’06]) Suppose that k ≥ 1 such that δ3k + 3δ4k < 2 and let x ∈ Rn be a vector with kxk0 ≤ k. Then for y = Φx, the solution of Eq. (7) is unique and equal to x. This result is recently simplified slightly in the following way: Theorem 3.9 ([Candes’08]) Suppose that k ≥ 1 such that √ δ2k < 2 − 1. Let x ∈ Rn be a vector with kxk0 ≤ k. Then for y = Ax, the solution of Eq. (7) is unique and equal to x. In fact kx − x∗ k2 ≤

2(1 + ρ) 2 kAx − Ax∗ k2 + kx − x∗T k1 . 1−ρ 1−ρ

where x∗ is the (P1) solution (the solution of Eq. (7) and x∗T is the vector of 2k the k largest components of x∗ . Here ρ = √δ2−1 .

11

LAI: UNDERDETERMINED SYSTEMS

√ As we have already known that δ2k < 2 − 1 < 1 which implies that the sparse solution is unique. The above result mainly explains that the (P1) solution (the solution of Eq. (7)) is equal to the (P0) solution (the solution of Eq. (1)). The results are consequences of the following Theorem 5.2 and hence we omit the proofs of the above two theorems here. Let us discuss what kind of matrices Φ satisfies the RIP. So far there is no explicit construction of matrices of any size which possess the RIP. Instead, there are a couple of constructions based on random matrices which satisfy the RIP with overwhelming probability. In [Cand´es, Romberg, and Tao’06], the following results were proved using the measure concentration technique (cf. [Ledoux’01]). Theorem 3.10 Suppose that A = [aij ]1≤i≤m,1≤j≤n be a matrix with entries √ aij being iid Gaussian random variables with mean zero and variance 1/ m. Then the probability    n 2 2 2 2 P kΦxk2 − kxk2 ≤ ǫkxk2 ≥ 1 − (1 + 2/ǫ)k e−mǫ /c . (14) k

for any vector x ∈ Rn with kxk0 = k, where c > 2 is a constant and kxk0 denotes the number of nonzero entries of vector x.  2 Once we choose k < m such that nk (1 + 2/ǫ)k e−mǫ /c < 1 small enough, we will have a good probability to have a matrix satisfying the RIP. Indeed,  n since k ≤ (n/e)k ,   n 2 2 (1 + 2/ǫ)k e−mǫ /c ≤ e−mǫ /c+k ln(n/e)+k ln(1+2/ǫ) . k As long as m > kc ln(n(1 + 2/ǫ)/e)/ǫ2 , we have  P kΦxk22 − kxk22 ≤ ǫkxk22 > 0.

That is, a matrix with RIP can be found with positive probability. Theorem 3.10 can be simply proved based on the following theorem (cf. [Baranuik, Daveport, DeVore, Wakin’08]).

12

307

308

LAI: UNDERDETERMINED SYSTEMS

Theorem 3.11 Suppose that A = [aij ]1≤i≤m,1≤j≤n be a matrix with entries √ aij being iid Gaussian random variables with mean zero and variance 1/ m. Then for any ǫ > 0, the probability ǫ2 m ), P( kAxk22 − kxk22 < ǫkxk22 ) ≥ 1 − 2 exp(− c

(15)

where c is a positive constant independent of ǫ and kxk2 for any x ∈ Rn . In general, there are many other random matrices satisfying the above probability estimate. Typically, matrices with sub-Gaussian random variables possess the RIP. See [Mendelson, Pajor, and Tomczak-Jaegermann’07, ’08]. In addition to the measure concentration approach, there are several other ideas to prove the results in the above theorem. For example, [Pisier’86] and [Lai’08]. We refer to [Lai’08] for an elementary proof of Theorems 3.11 and 3.10 and similar theorems for sub-Gaussian random matrices. For convenience, we borrow the proof of Theorem 3.11 from [Lai’08] and present it in the Appendix for interested reader.

3.3

The re-weighted ℓ1 Method

The re-weighted ℓ1 minimization is the following iterations: (1) for k = 0, solve the standard ℓ1 problem: x ∈ Rn , Ax = y},

min{kxk1 ,

(16)

(2) for k > 0, find x(k) which solves the following weighted ℓ1 problem: min

n X |xi | i=1

wi

,

x ∈ Rn , Ax = y},

(k−1)

(17)

with wi = |xi | + ǫ for k = 1, 2, 3, · · · , n. This method is introduced in [Cand´es, Watkin, and Boyd’07]. The researchers gave some heuristic reasons that the algorithm above converges much faster than the standard ℓ1 method. It is still interesting why the method works better in theory.

13

LAI: UNDERDETERMINED SYSTEMS

4

The OGA Approach

There are many versions of the Optimal Greedy Algorithm(OGA) available in the literature. See [Temlyakov’00], [Temlyakov’03], [Tropp’04], and [Petukhov’06]. We mainly explain the optimal greedy algorithm (OGA) proposed by A. Petukhov in 2006 when Φ is obtained from a tight wavelet frame. That is, Φ is a matrix whose columns are frame components φi , i = 1, · · · , n satisfying ΦΦT = Im , where Im is the identity matrix of size m × m. It has two distinct advantages: (1) Iterative steps for the least squares solution and (2) more than one terms are chosen in each iteration. e be the Let Λ be an index set which is a subset of {1, 2, · · · , n} and Λ complement of Λ in {1, 2, · · · , n}. Also let PΛ be the diagonal matrix of size n × n with entries to be 1 if the index is in Λ and 0 otherwise. Suppose that we have a fixed index set Λ. We first introduce a computationally efficient algorithm for finding coefficients of the linear combiP nation fΛ = i∈Λ ai φi which is the least squares approximation of f , i.e., kf − fΛ k2 = min{kf − gk2 , g ∈ SΛ } where f ∈ Rm is a given vector in Rm and SΛ is the span of φi, i ∈ Λ. In general, fΛ can be computed directly by inverting a Gram matrix [hφi , φj i]i,j∈Λ . When m is large, it is more efficient to use the following algorithm to find an approximation of fΛ . Algorithm LSA (least squares approximation): Set k = 0, g 0 = f, f 0 = 0. For k ≥ 1, let g k = g k−1 −ΦPΛ ΦT g k−1 and f k = f k−1 +ΦPΛ ΦT g k−1 . Stop the iterations when g k − g k−1 is very small. We have the following Theorem 4.1 The sequence f k converges to fΛ in the following sense: kf k − fΛ k2 ≤ (1 − γ 2 )k/2 kfΛ k2 , where γ is the least non-zero singular value of the matrix ΦΛ . Proof. We rewrite g k as g k = gΛk +e g k , where gΛk is the best approximation of g k using the span of columns from ΦΛ . Clearly, gΛ0 = fΛ . For k ≥ 1, gΛk = gΛk−1 − ΦPΛ ΦT gΛk−1 since the best approximation operator in Rm is a linear operator. Similar for fΛk = fΛk−1 + ΦPΛ ΦT gΛk−1. Note that fΛk = f k for all k ≥ 0. We have f k + gΛk = fΛk + gΛk = fΛk−1 + gΛk−1 = · · · = fΛ0 + gΛ0 = fΛ . 14

309

310

LAI: UNDERDETERMINED SYSTEMS

It follows that kf k − fΛ k2 = kgΛk k2 = k(I − ΦPΛ ΦT )gΛk−1 k2 . Note that I − ΦPΛ ΦT = Φ(I − PΛ )ΦT and hence kI − ΦPΛ ΦT k2 ≤ kΦ(I − PΛ )ΦT k2 ≤ (1 − γ 2 )1/2 . Therefore, kf k − fΛ k2 = kgΛk k2 ≤ (1 − γ 2 )1/2 kgΛk−1k2 ≤ · · · ≤ (1 − γ 2 )k/2 kgΛ0 k2 = (1 − γ 2 )k/2 kfΛ k2 . This completes the proof. We are now ready to present the Petukhov version of orthogonal greedy algorithm (OGA). Algorithm OGA: Set Λ0 = ∅, g 0 = f, f 0 = 0. Choose a threshold r ∈ (0, 1] and a precision ǫ > 0; k−1 Step 1. For k ≥ 1, find Mk = maxi∈Λ , φi /kφiki|; and Let Λk = / k−1 |hg k−1 Λk−1 ∪ {i, |hg , φi /kφiki ≥ rMk }; Step 2. Apply Algorithm LSA above over Λk to approximate g k−1 to find fΛk and gΛk . Update f k = f k−1 + fΛk and g k = g k−1 − fΛk . Step 3. If kf − f k k2 ≤ ǫ, we stop the algorithm. Otherwise we advance k to k + 1 and go to Step 1. There is lack of theory to justify why the above OGA is convergent in the original paper [Petukhov’06] and in the literature so far. We now present an analysis of the convergence of the above OGA. Theorem 4.2 Suppose that Φ of size n × N has the RIP for order k with 1 ≤ k ≤ n. Then the above OGA converges. Proof. Without loss of generality we may assume that Λm = {1, 2, · · · , nm } for some nm < n, where m = 1, 2, · · · . Let Gm = [hφi , φj i]1≤i,j≤nm be the Grammian matrix. Define am ≤ kGm k2 ≤ bm 15

LAI: UNDERDETERMINED SYSTEMS

311

to be the smallest and largest eigenvalues of symmetric Gm . The RIP of Φ for integer nm implies that am > 0 for m = 1, 2, · · · , m0 with nm0 = n. T We first observe that the best approximation fΛm = Φ−1 m [hf, φi i]1≤i≤nm . Then due to the result in Theorem 4.1, let us for simplicity, assume that fΛm is the best approximation of Rm−1 (f ). We next note that for i ∈ Λm \Λm−1 , |hRm−1 (f ), φi/kφiki| ≥ rMm with Mm =

max |hRm−1 (f ), φi/kφiki| = max |hRm−1 (f ), φi/kφiki| i=1,··· ,n

i∈Λ / m−1 n X

≥ |

i=1

αi hRm−1 (f ), φii|

Pn Pn for any α such that |α | ≤ 1. Assume that f = i i i=1 i=1 ci φi with Pn i=1 |ci | ≤ 1 (with appropriate normalization). It follows that Mm ≥ |hRm−1 (f ), f i| = kRm−1 (f )k2.

Hence we have kRm (f )k2 = hRm−1 (f ) − fΛm , Rm−1 (f ) − fΛm i = kRm−1 (f )k2 − kfΛm k2 and T 2 kfΛm k2 = kΦ−1 m [hRm−1 (f ), φi ]i=1,··· ,nm k 1 ≥ 2 k [hRm−1 (f ), φi ]Ti=1,··· ,nm k2 am 1 ≥ 2 r 2 n2m kRm−1 (f )k2 . am

That is, kRm (f )k2 = kRm−1 (f )k2 − kfΛm k2 ≤ kRm−1 (f )k2 −

1 2 2 r nm kRm−1 (f )k2 . a2m

Summing the above inequality over m = 1, · · · , k, we get kRk (f )k2 ≤ kR0 (f )k2 −r 2

k k X X 1 2 1 2 nm kRm−1 (f )k2 ≤ kf k2 −r 2 nm kRk (f )k2 . a a m=1 m m=1 m

16

312

LAI: UNDERDETERMINED SYSTEMS

because of the monotonicity of kRm (f )k. In other words, (r 2

k X 1 2 nm + 1)kRk (f )k2 ≤ kf k2. a m=1 m

P As km=1 n2m diverges and am nonincreases, kRk (f )k has to converge to zero. This completes a proof of the convergence of this OGA. The OGA can be used to solve our research problem Eq. (1). For y ∈ Rk , the OGA algorithm uses the indices which are associated with the terms |hy, φii|, i ∈ {1, 2, · · · , n} which is ≥ r% of the largest value. As the size of Λi increases, it finds an approximation xOGA,ǫ such that ΦxOGA,ǫ is closed to y within the given ǫ. That is, kΦxOGA,ǫ − yk ≤ ǫ. We now explain why xOGA,ǫ is a good approximation of x. Due to the construction, the number of nonzero entries kxOGA,ǫ k0 = k ∗ 0 and hence xOGA,ǫ is away from x by ǫ/α2k . Next we need to show that k ∗ ≤ k may happen. Assume that each column of Φ is normalized. For y = Φx with x = (x1 , x2 , · · · , xn )T , without loss of generality, we may assume that the support of x is S = {1, 2, · · · , k}, |xk | = min{|xj | 6= 0, i = 1, · · · , n}, and |x1 | = kxk∞ . Suppose that 1 1 k≤ + , (18) 2M 2 where M = M(Φ) stands for the mutual coherence of Φ. Then we can claim that the support(xOGA,ǫ ) ⊂ support(x). Recall Φ = [φ1 , · · · , φn ] with φi

17

LAI: UNDERDETERMINED SYSTEMS

313

being the ith column of Φ. Let us first compute the inner products of y with φi ’s. k X |hy, φii| = |hΦx, φi i| = | hxj φj , φii|. j=1

and |

k X j=1

hxj )φj , φii| ≥ |hx1 φ1 , φii| −

k X j=2

|hxj φj , φii|.

In particular, we have |hy, φ1i| ≥ |x1 | − M(k − 1)|x2 |. and |hy, φii| ≤ |

k X j=1

hxj φj , φi i| ≤ |x1 |kM.

By our assumption in Eq. (18), we have |x1 | − M(k − 1)|x2 | ≥ |x1 | − M(k − 1)|x1 | ≥ |x1 |kM. it follows that |hy, φ1i| ≥ |hy, φii|,

∀i ≥ 2.

That is, the largest inner product is |hy, φ1i|. Furthermore, let us assume   1 |xk | k≤ + M or rk|x1 |M ≤ |xk | − M(k − 1)|x1 |, (1 + r)M |x1 |

(19)

where r is the positive constant r < 1 employed in the OGA. Then for 2 ≤ j ≤ k, |hy, φj i| ≥ |xj | − M(k − 1)|x1 | ≥ |xk | − M(k − 1)|x1 | and r|hy, φ1i| ≤ rk|x1 |M. It follows that |hy, φj i| ≥ r|hy, φ1i|. That is, the first greedy step in the above OGA picks up all the indices of the nonzero entries of x. In particular, when the nonzero entries of x are 1 in absolute value, the condition in Eq. (19) is simplified to k≤

1 (1 + M). (1 + r)M 18

314

LAI: UNDERDETERMINED SYSTEMS

That is, if k satisfies Eq. (18), then k satisfies Eq. (19). Under the condition in (18) or the conditions in (18) and (19), the OGA picks all the entries φ1 , · · · , φk . Hence, the support(xOGA,ǫ √ ) is∗ the same as the support of x. Furthermore, kxOGA,ǫ − xk1 ≤ kkx − xk2 . Since kΦ(xOGA,ǫ − x)k2 = kΦxOGA,ǫ − yk2 ≤ ǫ, we have ǫ2 ≥ = ≥ =

kΦ(xOGA,ǫ − x)k22 kxOGA,ǫ − xk2 + (xOGA,ǫ − x)T (G − I)(xOGA,ǫ − x) kxOGA,ǫ − xk22 − M(kxOGA,ǫ − xk21 − kxOGA,ǫ − xk22 ) (1 + M)kxOGA,ǫ − xk22 − MkkxOGA,ǫ − xk22 .

That is, kxOGA,ǫ − xk22 ≤

ǫ2 . 1 − M(k − 1)

That is, under the assumption that the sparsity of x is small, i.e., Eq. (18), xOGA,ǫ approximates the sparse solution x very well.

4.1

L1 Greedy Algorithm

Recently, Kozlov and Petukhov proposed a new greedy algorithm (cf. [Kozlov and Petukhov’08]). It is called L1 Greedy Algorithm. The algorithm starts with the solution of the ℓ1 minimization under the constraint Az0 = y. [1] Let z0 be the solution of the ℓ1 minimization under the constraint Az = y among z ∈ Rn . [2] Let M = kz0 k∞ . [3] For i = 1, · · · , N, let W ∈ Rn be a weighted vector with 1 in the all entries except for those entries which are 1/10000 when |zi−1 j | ≥ 0.8M, 1 ≤ j ≤ n. [4] Solve the weighted ℓ1 minimization problem min{

n X j=1

|zi |/wi , Az = y, z ∈ Rn }

and let zi be the solution.

19

LAI: UNDERDETERMINED SYSTEMS

315

[5] If zi is not yet a sparse solution, let M = 0.8M and return to Step 3. The algorithm works well for random matrix A of size 512×1024. It is interesting to give an analysis of the convergence or reasons why the algorithm works.

5

The ℓq Approach

Let kxkq = (

n X i=1

|xi |q )1/q

be the standard ℓ quasi-norm for 0 < q < 1. It is easy to see that lim kxkqq = q

q→0+

kxk0 . We can use minimization

kxkqq

to approximate kxk0 . Thus, we consider the following

min{kxkqq ,

x ∈ Rn , Φx = y}.

(20)

for 0 < q ≤ 1 as an approximation of the original research problem Eq. (1). A solution of the above minimization is denoted by ∆q Φx.

5.1

Recent Results on the ℓq Approach

The several ℓq methods were studied recently in [Chartrand’07], [Foucart and ¨ Yilmaz’08]. The first Lai’08], [Davies and Gribonval’08] and [R. Saab and O. piece of results is shown in [Chartrand’07] Theorem 5.1 Let q ∈ (0, 1]. Suppose that there exists a k > 1 such that the matrix Φ has RIP constant such that δks + k 2/q−1 δ(k+1)s < k 2/q−1 − 1. Then the solution of Eq. (20) is the sparest solution. One can see that this result is a generalization of Theorem 3.10. When q = 1 and k = 3, the above condition is the condition in Theorem 3.10. In fact, the proof is a generalization of the proof in [Cand´es, Romberg, and Tao’06] for ℓ1 norm to ℓq quasi-norm. In [Foucart and Lai’08], we felt that the non-homogeneity of the Restricted Isometry Property (13) contradicted the consistency of the problem with respect to measurement amplification, 20

316

LAI: UNDERDETERMINED SYSTEMS

or in other words, that it was in conflict with the equivalence of all the linear systems (c A)z = c y, c ∈ R. Instead, we introduce αk , βk ≥ 0 to be the best constants in the inequalities αk kzk2 ≤ kAzk2 ≤ βk kzk2 ,

kzk0 ≤ k.

Our results are to be stated in terms of a quantity invariant under the change A ← c A, namely β2s 2 γ2s := ≥ 1. α2s 2 In fact, αk2 = 1 − δk and βk2 = 1 + δk . We use this slightly modified version of RIP and work through the arguments of [Candes, Romberg, Tao’06] in terms of quasi-norm ℓq to get the following theorem. Our main result in this section is the following (see [Foucart and Lai’08] for a proof) Theorem 5.2 Given 0 < q ≤ 1, if  1/q−1/2 √ t γ2t − 1 < 4( 2 − 1) s

for some integer t ≥ s,

(21)

then every s-sparse vector is exactly recovered by solving Eq. (20). Corollary 5.3 Under the assumption that √ γ2s < 4 2 − 3 ≈ 2.6569,

(22)

every s-sparse vector is exactly recovered by solving (7). When q = 1, this result slightly improves Cand`es’ condition in Theorem 3.9, since the constant γ2s is expressed in terms of the Restricted Isometry Constant δ2s as 1 + δ2s γ2s = , 1 − δ2s √ hence the condition (22) becomes δ2s < 2(3 − 2)/7 ≈ 0.4531. The second special instance we are pointing out corresponds to the choice t = s + 1. In this case, Condition (21) reads 1/q−1/2  √ 1 γ2s+2 < 1 + 4( 2 − 1) 1 + . s The right-hand side of this inequality tends to infinity as q approaches zero. The following result is then straightforward. 21

LAI: UNDERDETERMINED SYSTEMS

317

Corollary 5.4 Under the assumption that γ2s+2 < +∞, every s-sparse vector is exactly recovered by solving (20) for some q > 0 small enough. The key point is to show for any v which is in the null space of Φ, i.e, Φv = 0, kvS kq < kvS¯ kq unless v = 0, where S stands for the index set of the nonzero entries of the solution x ∈ Rk , vS denotes the vector v restricted in S with other entries being zero and S¯ is the complement indices of S. This is indeed the case since for v = x − x∗ in the null space of Φ, kvS¯ kq ≤ kvS kq , where x∗ is the solution of Eq. (20) and x is the sparse vector supported on S satisfying Φx = y. Combining the above inequality, we have a contradiction that kvS kq < kvS kq unless v = 0. Thus, the solution of the minimization is the exact solution if we can show kvS kq < kvS¯ kq . This inequality was recognized in [Grinoval and Nielson’03]. The condition (21) in Theorem 5.2 implies this inequality.

5.2

More about the ℓq Approach

We first show that the minimization problem Eq. (20) has a solution for q > 0. That is, the existence of the solution is independent of the RIP of Φ. See [Foucart and Lai’08] for a proof. Theorem 5.5 Fix 0 < q < 1. There exists a solution ∆q Ax solving Eq. (20). We next consider the situation that the measurements y are imperfect. That is, y = Φx0 + e with unknown perturbation e which is bounded by a known amount kek2 ≤ θ. In this case we consider the following min{kxkqq ,

x ∈ Rn , kΦx − yk2 ≤ θ}.

(23)

A solution of the above minimization is denoted by ∆q,θ Φx. As in the previous section, we have Theorem 5.6 Fix 0 < q < 1 and θ > 0. There exists a solution ∆q,θ Φx solving Eq. (23). 22

318

LAI: UNDERDETERMINED SYSTEMS

In [Saab and Yilmaz’08], they extended the proof in [Candes’08] in the ℓq setting. They have Theorem 5.7 Let q ∈ (0, 1]. Suppose that δks + k 2/q−1 δ(k+1)s < k 2/q−1 − 1 for some k > 1 with kS ∈ Z+ . Let x∗ be the solution of Eq. (23). Then kx − x∗ kq2 ≤ C1 η p +

C2 ∆s (x)qq 1−q/2 s

for two positive constants C1 and C2 . Here, the quantity ∆k (x)q denotes the error of best k-term approximation to x with respect to the ℓq -quasinorm, that is ∆k (x)q := inf kx − zkq . kzk0 ≤k

The above theorem is an extension of Chartrand’s result (cf. Theorem 5.1). Next we state another main theoretical result of this survey. We refer to [Foucart and Lai’08] for a proof. Theorem 5.8 Given 0 < q ≤ 1, if Condition (21) holds, i.e. if  1/q−1/2 t γ2t − 1 < 4( 2 − 1) s √

for some integer t ≥ s,

(24)

then a solution x∗ of (Pq,θ ) approximate the original vector x with errors kx − x∗ kq ≤ C1 · σs (x)q + D1 · s1/q−1/2 · θ, σs (x)q kx − x∗ k2 ≤ C2 · 1/q−1/2 + D2 · θ. t

(25) (26)

The constants C1 , C2 , D1 , and D2 depend only on q, γ2t , and the ratio s/t. Comparison of the results in Theorems 5.8 and 5.7 is given in [Saab and Yilmaz’08]. It concludes that when k is around 2, the sufficient condition (24) is weaker while the condition in Theorem 5.7 is weaker when k > 2. Numerical experiemental results in [Foucart and Lai’08] show that the ℓq method is able to 100% recovery all the sparse vectors with sparsity is about s = m/2. That is, in order to have ks ≤ m, k is about 2. Next we consider that negative results discussed in [Davies and Gribnoval’08]. That is, when the ℓq method may fail. 23

LAI: UNDERDETERMINED SYSTEMS

319

Theorem 5.9 For any ǫ > 0, there exists √ an integer s and dictionary Φ with a restricted isometry constant δ2s ≤ 1/ 2 + ǫ for which ℓ1 method fails on some k sparse vector. √ Now the gap between the √ positive result δ2s = 2(3 − 2)/7 = 0.4531 and the negative result δ2s = 1/ 2 + ǫ = 0.7071 is about 0.2540. In general, Davies and Gribnoval consider a special matrix Φ which has a unit spectral norm, i.e., kΦyk2 kΦk2 = sup = 1. y6=0 kyk2 Then they define

σk2 (Φ) := min y∈ kyk0 ≤k

kΦyk2 kyk2

which is equal to αk2 in [Foucart and Lai’08]. Theorem 5.10 Fix 0 < q ≤ 1 and let 0 < ηq < 1 be the unique positive solution to 2 ηq2/q + 1 = (1 − ηp ). p For any ǫ > 0, there exist integers s ≥ 1, N ≥ 2s + 1 and a minimally redundant unit spectral norm tight frame ΦN −1×N with 2 σ2s (Φ) ≥ 1 −

2 ηq − ǫ 2−q

for which there exists an s-sparse vector which cannot be uniquely recovered by the ℓq method.

5.3

Numerical Computation of the ℓq Approach

The minimization problem (Pq ) suggested to recover x is nonconvex, Following [Foucart and Lai’08], we introduce an algorithm to compute a minimizer of the approximated problem, for which we give an informal but detailed justification. We shall proceed iteratively, starting from a vector z0 satisfying Az0 = y, which is a reasonable guess for x, and constructing a sequence (zn ) recursively by defining zn+1 as a solution of the minimization problem minimize z∈RN

N X i=1

|zi | (|zn,i| + ǫn )1−q 24

subject to Az = y.

(27)

320

LAI: UNDERDETERMINED SYSTEMS

Here, the sequence (ǫn ) is a nonincreasing sequence of positive numbers. It might be prescribed from the start or defined during the iterative process. In practice, we will take limn→∞ ǫn = 0. We shall now concentrate on convergence issues. We start with the following Proposition 5.11 For any nonincreasing sequence (ǫn ) of positive numbers and for any initial vector z0 satisfying Az0 = y, the sequence (zn ) defined by (27) admits a convergent subsequence. Similar to the proof of Theorem 5.5, we can see that the solution of the above minimization exists. We further show that the solution xǫ of Eq.(27) b be a solution will converge to the solution Eq. (20). For convenience, let x of Eq. (20). Theorem 5.12 Fix 0 < q ≤ 1. Let xǫ be the solution of Eq. (27). Then xǫ b as ǫ → 0+ . converges to x

The new minimization problem Eq. (27) can be solved using ℓ1 method since Fq,ǫ (x) is a weighted ℓ1 norm. Proposition 5.13 Given 0 < q < 1 and the original s-sparse vector x, there exists η > 0 such that, if ǫn < η

and

kzn − xk∞ < η

for some n,

(28)

then the algorithm 27 produces the exact solution. That is, zk = x

for all k > n.

The constant η depends only on q, x, and γ2s . Lemma 5.14 Given 0 < q ≤ 1 and an s-sparse vector x, if Condition (21) holds, i.e. if  1/q−1/2 √ t for some integer t ≥ s, γ2t − 1 < 4( 2 − 1) s then for any vector z satisfying Az = y, one has   kz − xkqq ≤ C kzkqq − kxkqq ,

for some constant C depending only on q, γ2t , and the ratio s/t. 25

LAI: UNDERDETERMINED SYSTEMS

321

Frequencies of Successfully Solving the Sparsest Solutions

1

Frequencies of Successes

0.8

0.6 l

1

rwl

1

OGA lq ROMP

0.4

0.2

0

10

20 30 40 50 Numbers of Nonzero Entries of the Sparest Vectors

60

Figure 1: Comparison of ℓ1 , ℓq , and OGA methods for sparest solutions Finally we present the following Proposition 5.15 Given 0 < q < 1 and the original s-sparse vector x, if Condition (21) holds, i.e. if √ γ2t − 1 < 4( 2 − 1)

 1/q−1/2 t s

for some integer t ≥ s,

then there exists ζ > 0 such that, for any nonnegative ǫ less than ζ, the vector x is exactly recovered by solving (27). The constant ζ depends only on N, q, x, γ2t , and the ratio s/t. Numerical results show that our ℓq approximation method works well. In Figure 1, we present the frequencies of the exact recovery using various methods for Gaussian random matrix of size 128 × 512 for various sparse vectors. For each sparsity, we randomly generate the Gaussian random matrix Φ and 26

322

LAI: UNDERDETERMINED SYSTEMS

a vector x with the given sparsity and tested various methods to solve the x for 100 times. The number of exact recovery by each method is divided by 100 to obtain the frequency for the method.

References [1] Baraniuk, R., M. Davenport, R. DeVore, and M. B. Wakin, A simple proof of the restricted isometry property for random matrices, Constructive Approximation, to appear, 2008. [2] Buldygin, V. V. and Yu, V. Kozachenko, Metric Characterization of Random Variables and Random Processes, AMS Publication, Providence, 2000. [3] Cand´es, E. J. Compressive sampling. International Congress of Mathematicians. Vol. III, 1433–1452, Eur. Math. Soc., Z”urich, 2006. [4] Cand´es, E. J., J. K. Romberg, Quantitative robust uncertainty principles and optimally sparse decompositions. Found. Comput. Math. 6 (2006), 2, 227–254. [5] Cand´es, E. J., J. K. Romberg, J. K. and T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math. 59 (2006), 1207–1223. [6] Cand´es, E. J. and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51 (2005), no. 12, 4203–4215. [7] Cand´es, E. J. and T. Tao, Near-optimal signal recovery from random projections: universal encoding strategies, IEEE Trans. Inform. Theory 52 (2006), no. 12, 5406–5425. [8] Cand´es, E. J., M. Watkin, and S. Boyd, Enhancing Sparsity by Reweighted l1 Minimization, manuscript, 2007. [9] R. Chartrand, Nonconvex compressed sensing and error correction, in International Conference on Acoustics, Speech, and Signal Processing, IEEE, 2007.

27

LAI: UNDERDETERMINED SYSTEMS

[10] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Process. Letters, 14(2007), 707–710. [11] R. Chartrand and V. Staneva, Restricted isometry properties and nonconvex compressive sensing, Inverse Problem, to appear, 2008. [12] M. Davies and R´emi Gribonval, Restricted Isometry constants where ℓq sparse recovery can fail for 0 < q ≤ 1, manuscript, 2008. [13] Donoho, D. L., Compressed sensing, IEEE Trans. Inform. Theory 52 (2006), 1289–1306. [14] Donoho, D. L., Sparse components of images and optimal atomic decompositions. Constr. Approx. 17 (2001), 353–382. [15] D. L. Donoho, Unconditional bases are optimal bases for data compression and for statistical estimation, Appl. Comput. Harmonic Anal., 1(1993), 100–115. [16] D. L. Donoho, Unconditional bases and bit-level compression, Appl. Comput. Harmonic Anal., 3(1996), pp. 388–92. [17] Donoho, D. L. and M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization, Proc. Natl. Acad. Sci. USA 100 (2003), no. 5, 2197–2202. [18] Donoho, D. L., M. Elad, and V. N. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory, 52 (2006), 6–18. [19] Donoho, D. L. and J. Tanner, Sparse nonnegative solution of underdetermined linear equations by linear programming. Proc. Natl. Acad. Sci. USA 102 (2005), no. 27, 9446–9451. [20] Elad, M. and A. M. Bruckstein, IEEE Trans. Inf. Theory 48(2002), 2558–2567. [21] S. Foucart and M. J. Lai, Sparsest Solutions of Underdetermined Linear Systems via ℓq minimization for 0 < q ≤ 1, to appear in Applied Comput. Harmonic Analysis, 2009.

28

323

324

LAI: UNDERDETERMINED SYSTEMS

[22] Geman, S., A limit theorem for the norm of random matrices, Ann. Prob., 8(1980), 252–261. [23] Gribnoval, R and M. Nielsen, Sparse decompositions in unions of bases, IEEE Trans. Info. Theory, 49(2003), 3320–3325. [24] I. Kozlov and A. Petukhov, ℓ1 greedy algorithm for sparse solutions of underdetermined linear system, manuscript, 2008. [25] M. J. Lai, Restricted isometry property for sub-Gaussian random matrices, manuscript, 2008. [26] Ledoux, M., The concentration of measure phenomenon, AMS Publication, Providence, 2001. [27] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, Uniform uncertainty principle for Bernoulli and subgaussian ensembles, Constructive Approx. 28(2008) 277–289. [28] S. Mendelson, A. Pajor, N. Tomczack-Jaegermann, Reconstruction and subgaussian operators in asymptotic geometric analysis, Geometric and Functional Analysis 17(2007), 1248-1272. [29] Natarajan, B. K., Sparse approximate solutions to linear systems, SIAM J. Comput., vol. 24, pp. 227234, 1995. [30] Needell, D. and R. Vershynin, Uniform uncertainty principal and signal recovery via regularized orthogonal matching pursuit, manuscript, 2007. [31] Petukhov, A., Fast implementation of orthogonal greedy algorithm for tight wavelet frames, Signal Processing, 86(2006), 471–479. [32] G. Pisier, Probabilistic Methods in the Geometry of Banach Spaces, Springer Verlag, Lecture Notes in Mathematics, No. 1206, 1986. ¨ Yilmaz, Sparse recovery by nonconvex optimization – [33] R. Saab and O. instant optimality, manuscript, 2008. [34] Temlyakov, V. N., Weak greedy algorithms, Adv. Comput. Math. 12 (2000), 213–227.

29

LAI: UNDERDETERMINED SYSTEMS

325

[35] Temlyakov, V. N., Nonlinear methods of approximation, Foundations of Comp. Math., 3 (2003), 33–107. [36] Tropp, J. A., Greed is good: Algorithmic results for sparse approximation, IEEE Trans. Inf. Theory, 50 (2004), 2231–2242. [37] Wachter, K. W., The strong limits of random matrix spectra for sample matrices of independent elements, Ann. Prob., 6(1978), 1–18.

6

Appendix 1: Gaussian Random Matrices

Let A = [aij ]1≤i≤m,1≤j≤n be a rectangular matrix with aij being iid Gaussian random variables with mean zero and variance σ 2 . Let x = (x1 , · · · , xn )T ∈ Rn be a vector. We use kxk2 denotes thePnorm of x. Consider a random variable X = (X1 , · · · , Xm )T with Xi = ( nj=1 aij xj )2 , i = 1, · · · , m. Since E(aij ) = 0, we have E(Xi ) = σ 2 kxk22 for all i. Let ξi = Xi − E(Xi) be a new random variable and let m X Sm = ξi i=1

be the sum of these new independent random variables. It is easy to see that Sm = kAxk22 − mσ 2 kxk22 . In this section, we are interested in proving the following inequality. Theorem 6.1 For any ǫ > 0, the probability P(|kAxk22 − mσ 2 kxk22 | < ǫkxk22 ) ≥ 1 − 2 exp(−

ǫ2 m ), (mσ 2 )(cǫ + 2mσ 2 )

(29)

where c is a positive constant independent of ǫ and kxk2 . We plan to use the Bernstein inequality (cf. [Buldygin andKozachenko’00, p.27]) to prove this result. For convenience, we state the inequality below. Theorem 6.2 Suppose that ξi , 1 ≤ i ≤ m are independent random variables Pm with E(ξi) = 0 and E(ξi2) = νi2 < ∞, 1 ≤ i ≤ m. Let Sm = i=1 ξi . Moreover, suppose that there exists a constant H > 0 such that |E(ξik )| ≤

m! 2 k−2 ν H 2 i

30

(30)

326

LAI: UNDERDETERMINED SYSTEMS

for all integer k > 1 and all i = 1, · · · , m. Then the following inequality holds for all t > 0: the probability   t2 P P(|Sm | > t) ≤ exp − . 2 2(tH + m i=1 νi ) Proof. (The proof of Theorem 6.1.) We need to study Eq. (30) for ξi = Xi − E(Xi) for k ≥ 3 since for k = 2, Eq. (30) is satisfied trivially. For convenience, let µ = E(Xi ) = σ 2 kxk22 . It is easy to see E(|ξi|2 ) = 2µ2 . Thus, νi2 = 2µ2 . For k ≥ 3, we have k   X k E(|ξi| ) = E((Xi − µ) ) = E(Xij )(−1)k−j µk−j . j j=0 k

k

Let us spend some effort to compute E(Xij ). We have E(Xij )

= E(

n X

X

aij xj )2j =

j=1

j1 +···+jn =2j

(2j)! n 1 j2 E(aji,1 ai,2 · · · aji,n )xj11 · · · xjnn . j1 ! · · · jn !

Note that E(aℓij ) = 0 for all odd integers ℓ and it is known (using integration ℓ! by parts) that E(aℓij ) = ℓ/2 σ ℓ for even integers ℓ. Since aij are iid 2 (ℓ/2)! random variables, we have E(Xij ) =

X

2j1 +···+2jn =2j

(2j)! 2jn 2j1 1 2j2 E(a2j · · · (xn )2jn i,1 ai,2 · · · ai,n )(x1 ) (2j1 )! · · · (2jn )!

(2j)! X j! (2j1 )! · · · (2jn )! 2j n σ (x1 )2j1 · · · x2j n j! j +···+j =j (2j1 )! · · · (2jn )! 2j j1 ! · · · jn ! n 1 n (2j)!σ 2j X 2 j ( x) = 2j j! j=1 j =

=

(2j)! 2j (2j)! j σ kxk2j µ. 2 = j 2 j! 2j j!

Thus, k

E(|ξi| ) ≤

k   X k (2j)! j=0

j

31

2j j!

µj µk−j .

LAI: UNDERDETERMINED SYSTEMS

By using Stirling’s formula, we have k

|E(|ξi| )| ≤

(2j)! ≤ 2j j!/2 ≤ 2j k!/2 and hence, 2j j!

k   j X k 2 k!

j

327

µk

2 k! k k k! 4 9 = 3 µ = 2σ kxk42 3k−2 (σ 2 kxk22 )k−2 2 2 2 k! 4 4 k−2 ≤ 2σ kxk2 H 2

with H = 13.5σ 2 kxk22 . Theorem6.2, we have P (|kAxk22

− mσ

2

j=0

That is, Eq.

kxk22 |

(30) is satisfied for k ≥ 3.

t2 > t) ≤ 2 exp − 2(t13.5µ + 2mµ2 ) 



.

By

(31)

Choosing t = ǫkxk22 , we have t13.5µ + 2mµ2 = σ 2 kxk42 (13.5ǫ + 2mσ 2 ) and the above probability yields:   ǫ2 m 2 2 2 2 P (|kAxk2 − mσ kxk2 | > ǫkxk2 ) ≤ 2 exp − . (32) 2(mσ 2 )(13.5ǫ + 2mσ 2 ) In other word, the desirable result √ of Theorem 6.1 is proved. We remark that when σ = 1/ m, the estimate Eq. (29) gives a proof of Theorem 3.11. For this special case, we have Theorem 6.3 Suppose that ξ is a Gaussian random variable with mean zero and variance σ 2 . Let A be an m × n matrix whose entries are iid copies of ξ. For any ǫ > 0, the probability  2  ǫm 2 2 2 2 2 P(|kAxk2 − mσ kxk2 | < ǫnσ kxk2 ) ≥ 1 − 2 exp − , (33) c where c is a positive constant independent of ǫ and kxk2 . Proof. (The Proof of Theorem 6.3.) For a random matrix A of size m × n with entries aij being iid Gaussian random variables with zero mean and variance σ 2 , then we use ǫmσ 2 for ǫ in Eq. (32). Then we have   2 ǫ m 2 2 2 2 2 P( kAxk2 − mσ kxk2 > ǫmσ kxk2 ) ≤ 2 exp − . (34) 2(ǫ13.5 + 2) This completes a proof of Theorem 6.3. 32

JOURNAL 328 OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.2, 328-335, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

PATTERNS P   P  (  )  (0,1  0,1) Milton del Castillo Lesmes Acosta Universidad Distrital Francisco José de Caldas, Bogotá, Colombia.

ABSTRACT. We will find, from an approximation point of view, patterns, from uniform patterns to fractal patterns.

1. RATIONAL NUMBERS.

p  p  , q   , q  0 ,  the set of integers. We will consider q 

Given the set   

m   n   m  0,1, 2,...n  for n  1, 2,... then n  

   0,1    n n 1

In one-dimensional finite elements a 10-element model of 0,1 is illustrated in Fig. 1 where the connectivity is important to the global model.

Fig. 1 In two-dimensional finite elements a model of 0,1  0,1 is illustrated in Fig. 2, connectivity becomes more complicated (triangular elements is more popular)

ACOSTA: PATTERNS P

329

Fig. 2 The sets  and    (and so on) appear similar to all levels of magnification. Then we will follow a construction, specifically of    0,1 



  n as a projection of the set n 1

10  10 is as illustrated in Fig. 3

Fig. 3



 n 1

n

 n . The set

330

ACOSTA: PATTERNS P

And in the following figures, Fig. 4, Fig. 5, Fig. 6, there are cases for different values of p in p



n

 n

n1

Fig. 4

Fig. 5

ACOSTA: PATTERNS P

331

Fig. 6

 a  0,1,..., c   a b    This sets are specifically the sets of points  ,  b  0,1,..., c  ,for a specified p   .  c c  c  1, 2,..., p    Patterns

due

to

transformations

defined

for

functions f and

g

of

the

type

 x, y    f ( x, y), g ( x, y)  over a pattern, carry new patterns as in the Fig. 7, Fig. 8, Fig. 9, Fig.



10, applying for example  x, y   x 2 , y 2



p

over the set

 n1

Fig. 7

n

 n

332

ACOSTA: PATTERNS P

Fig. 8

Fig. 9

ACOSTA: PATTERNS P

333

Fig. 10

Working with other sets for example Farey sequences, we can get more patterns like the following in Fig. 11

Fig. 11

334

ACOSTA: PATTERNS P

2. NEW PATTERNS FROM PATTERNS. To construct a periodic function f :    with the condition f ( x  1)  f ( x) we need only to have g :    with g ( x)  f ( x) for x  0,1 and then f ( x)  g ( x  x(mod(1)) , where

x  x mod(1) means that for k   we have x  x  k  0,1 . In another way the same thing is obtained if we write f ( x)  g ( x   x) using the integer part function.

3. REPEATING A LOCATED PATTERN TO FIND NEW PATTERNS.

1



1



Take the family of lines from the point 0,1 to  , a  and from the point 1,1 to  , a  2  2  defined as

 x  det  0 1  2

  x y 1   1 1  0 , det  1  1 a 1   2

 y 1  1 1  0  a 1 

that is y  2 x(a  1)  1 and y  2 x(1  a)  2a  1 respectively for a  0, 0.1, 0.2,....,1 with the restriction x  0,1 as is shown in Fig. 12.

Fig. 12

then y   y  2( x   x)(a  1)  1 and y   y   2( x   x)(1  a)  2a  1 for

a  0, 0.1, 0.2,....,1 produce the pattern as shown in Fig. 13

ACOSTA: PATTERNS P

Fig. 13

4. BIBLIOGRAPHY Barnsley, Michael F. , and Hawley Rising. Fractals Everywhere. Boston: Academic Press Professional, 1993. Fugal D. Lee. Conceptual Waveletes in digital signal processing. 2007 Mandelbrot, Benoît B. The Fractal Geometry of Nature. New York: W. H. Freeman and Co., 1982 Peitgen, Heinz-Otto, and Dietmar Saupe, eds. The Science of Fractal Images. New York: SpringerVerlag, 1988 The Derive - Newsletter. Derive Users Group. 71/72. Austria 2008

335

JOURNAL 336 OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.2, 336-343, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

A note on the construction of the sl(2, R) integral for ordinary differential equations of maximal symmetry Sibusiso Moyo Department of Mathematics, Statistics and Physics, Steve Biko Campus, Durban University of Technology, PO Box 953, Durban 4000, South Africa. e-mail:[email protected] March 15, 2009 We discuss some of the integrability properties of ordinary differential equations of maximal symmetry. The relationship between the solution symmetries, the fundamental integrals and the sl(2, R) integrals is discussed. A linear combination of the fundamental integrals also leads to an integral which has the sl(2, R) subalgebra. The construction of the sl(2, R) integral for order n ≥ 9 can be tedious and hence requires a more compact generating function. Here an illustration of such a construction is given and the integral is constructed from the direct integration of the differential equation after multiplying by one of the autonomous integrating factors. It is also important to note that symmetries and first integrals form the underlying mathematical basis for the algebraic theory of integrable equations. Keywords: Symmetry, Lie-algebra, maximal symmetry, Integral

1

Introduction

We recall that Lie ([1],p 405) showed that the maximum number of point symmetries for second order ordinary differential equations is 2 + 6 and for higher order equations (n ≥ 3) is n + 4. In addition he further showed that an nth order equation which possesses n + 4 point symmetries is equivalent to y (n) = 0, where (n) denotes dn /dxn , under a point transformation X = F (x, y) Y = G(x, y). For n ≥ 3, X = F (x) which is called a fibre preserving transformation [2]. Lie showed that every linearisable second-order ordinary differential equation has the form y 00 = p(x, y)y 03 + q(x, y)y 02 + r(x, y)y 0 + s(x, y), (1) 1

MOYO: sl(2,R) integral

337

where the coefficients p, q, r and s satisfy the conditions A = 0 and B = 0, where A = 2qxy − 3pxx − ryy − 3px r + 3py s + 2qx q − 3rx p − ry q + 6sy p B = 2rxy − qxx − 3syy − 6px s + qy s + 3qy s − 2ry r − 3sx p + 3sy q. It is well known that when a symmetry is used to determine a first integral for a differential equation, the symmetry provides an integrating factor for the equation and remains as a symmetry of the first integral. A considerable amount of work on constructing integrating factors for ordinary differential equations has been done including the works by Cheb-Terrab and Roche in 1999 [4] and Leach and Bouquet in 2002 [6]. From these works one concludes that there is a strong link between the existence of symmetries be they local or nonlocal and the integrability of the given equation or systems of equations.

2

First Integrals and Integrating Factors

For purposes of this discussion we give a definition of a symmetry of a differential equation as follows: Definition: An nth order ordinary differential equation E(x, y, y 0 , ..., y (n) ) = 0,

(2)

admits the one-parameter Lie-group of point transformations x ¯ = X(x, y; ) = x + ξ(x, y) + O(2 ) y¯ = Y (x, y; ) = y + η(x, y) + O(2 ),

(3)

with infinitesimal generator G = ξ(x, y)∂x + η(x, y)∂y if the condition

h i G[n] E(x, y, y 0 , ..., y (n) )

(4) =0

(5)

E=0

holds, where G[n] is the nth extension of G needed to transform the derivatives in E = 0 given by    n  i−1   X X i G[n] = G + η (i) − y (j+1) ξ (i−j) ∂y(i) . (6) j   i=1

j=0

Here the indices deonote total differentiation with respect to x. (See Bluman and Kumei [3].) Definition: In addition a first integral I for an equation of maximal symmetry E = y (n) = 0 is defined as I = f (y, y 0 , y 0 , ..., y (n−1) ) 2

338

MOYO: sl(2,R) integral

where

dI df = 0 ⇐⇒ = 0. dx |E=0 dx |E=0

(7)

This means that if h(x, y, y 0 , y 00 , ..., y (n−1) ) is an integrating factor then dI = hE(x, y, y 0 , ..., y (n) )|E=0 = 0. dx |E=0

3

(8)

Some algebraic properties of equations of maximal symmetry

According to Lie’s classification [1] the real unimodular group sl(2, R) is provided by the Lie algebras spanned by the vectors ∂x ,

x2 ∂x + 2xy∂y .

(9)

(x2 − y 2 )∂x + 2xy∂y .

(10)

x∂x + y∂y ,

or ∂x ,

x∂x + y∂y

As an example, we consider the well-known third order ordinary differential equation of maximal symmetry y 000 = 0 (11) which has seven Lie-point symmetries given as G1 G4 G5

= ∂y , G2 = x∂y , G3 = x2 ∂y = y∂y = ∂x , G6 = x∂x + y∂y , G7 = x2 ∂x + 2xy∂y

with algebra 3A1 , {sl(2, R) ⊕s A1 }. Remark The first three are solution symmetries and provide a basis for the solution of (11), that is, 1, x, x2 , and y = c1 + xc2 + x2 c3 . G4 is the homogeneity symmetry to indicate that the equation is autonomous and G5 , G6 , G7 form the sl(2, R) subalgebra. Proposition 1 The autonomous integrating factors for (11) are given by y and y 00 . Using y 00 as the integrating factor and integrating the resulting expression, y 00 y 000 = 0, once leads to 21 y 002 = k, where k is a constant of integration. For k = 0 and k 6= 0 we have the sl(2, R) subalgebra and the algebra is sl(3, R) : 2A1 ⊕s {sl(2, R) ⊕ A1 } ⊕ 2A1 . It is also a well-known result that all equations of maximal symmetry in the form y (n) = 0 have the sl(2, R) element(see [13]) and references therein). Otherwise if y 00 = k is treated as a function we have G1 = ∂y , G2 = x∂y , G3 = ∂x , G4 = x∂x + 2y∂y . The algebra in this case is A14,9 : A2 ⊕s 2A1 . Proposition 2 If y is an integrating factor of the autonomous equation of maximal symmetry y (n) = 0 then the integral obtained using this integrating factor will always have the sl(2, R) subalgebra. 3

MOYO: sl(2,R) integral

339

• In the case of equation (11) using y as an integrating factor and integrating the equation obtained once as before gives yy 00 − 12 y 02 = k. This equation has the sl(2, R) subalgebra. Infact the integrated equation can be written k as (y 1/2 )00 = (y1/2 which is in the form of the Ermakov-Pinney equation )3 [7, 8]. A point transformation v = y 1/2 leads to the equation v 00 = k/v 3 . In this case k = 0 retains the second order differential equation and k 6= 0 gives the point symmetries G1 = ∂x , G2 = 2x∂x + v∂v , G3 = x2 ∂x + xu∂u . • The Ermakov-Pinney equation possesses the three-element algebra of Lie point symmetries sl(2, R) which is characteristic of all scalar ordinary differential equations of maximal symmetry (see [10, 12]). Proposition 3 The equation of maximal symmetry in the form y (n) = 0 will have as one of its autonomous integrating factors y iff n is odd. As an example we can easily verify that y 000 = 0, y (v) = 0 and y (vi) = 0 will all have y as one of the integrating factors. In the case where n is even y does not appear as one of the integrating factors.

4

Occurrence of sl(2, R) integrals

For the purposes of this section we shall call the fundamental integrals as those that emanate from the solution symmetries of the differential equation. The sl(2, R) integrals will be the integrals that arise as a linear combination of the fundamental integrals and possess the sl(2, R) subalgebra. For example, the integrating factors for (11) are 1, x, 21 x2 , from which we observe that there are also the coefficients of the solution symmetries of the differential equation. Associated with each of these are the integrals shown below: 1 2 000 2x y 000

x.y 1.y 000

=0 =0 =0

I1 = 12 x2 y 00 − xy 0 + y I2 = xy 00 − y 0 I3 = y 00 .

−→

(12)

In Flessas et al [14, 15] the numbering of the integrals is according to the one given above. We consider the fifth order-ordinary differential equation, y v = 0,

(13)

with autonomous integrating factors y, y 00 and y iv for illustrative purposes. Multiplying (13) by y and integrating the resulting equation gives 1 yy iv − y 0 y 000 + y 002 = J. 2

4

(14)

340

MOYO: sl(2,R) integral

• The integral in (14) for J 6= 0 has the point symmetries G1 G2 G3

= ∂x = x∂x + 2y∂y = x2 ∂x + 4xy∂y .

(15) (16) (17)

In the case that J = 0 we just obtain the four point symmetries which are G1 = ∂x , G2 = x∂x , G3 = y∂y and G4 = x2 ∂x + 4xy∂y . It is noted that the integral obtained using the integrating factor y always has sl(2, R) subalgebra according to proposition 2. Equation (13) has a basis of solutions 1, x, x2 , x3 , x4 so that each of these give us the solution symmetries, G1 G2 G3 G3 G3

= = = = =

∂x x∂x x2 ∂x x3 ∂x x4 ∂x.

Associated with these solution symmetries are the fundamental integrals x4 y v x3 y v x2 y v xy v 1y v

=0 =0 =0 =0 =0

−→

I0 I1 I2 I3 I4

1 4 iv = 24 x y − 61 x3 y 000 + 21 x2 y 00 − xy 0 + y 1 3 iv = 6 x y − 21 x2 y 000 + xy 00 − y 0 = 21 x2 y iv − xy 000 + y 00 = xy iv − y 000 = y iv .

(18)

Proposition 4 For the fifth order equation, y v = 0, the autonomous integral emanating from the integrating factor y appearing in equation (14) can be obtained from the linear combination J = I0 I4 − I1 I3 + 21 I22 where the Ii , i = 0, 1, 2, 3, 4 are as given in (18) above. We now consider the case for which n = 9, that is, the equation which was mentioned but not treated in [13]. In this case the equation takes the form y ix = 0.

(19)

The autonomous integrating factors corresponding to (19) are y, y 00 , y iv , y vi and y viii . If we use y in (19) as the integrating factor and integrating the subsequent equation we obtain 1 2 yy viii − y 0 y vii + y 00 y vi − y 000 y v + y iv = J. 2 The integral in (20) has Lie-point symmetries 5

(20)

MOYO: sl(2,R) integral

G1 G2 G3

341

= ∂x = x∂x + 4y∂y = x2 ∂x + 8xy∂y

which form the sl(2, R) subalgebra as expected. The rest of the integrals corresponding to the other integrating factors are given as follows: y 00 y ix y 0000 y ix y vi y ix viii ix y y

−→

y 00 y viii − y 000vii + y 0000 y vi − 21 y v 2 y 0000 y viii − y v y vii + 21 y vi 2 y vi y viii − 12 y vii 1 viii2 . 2y

2

(21)

Moreover it is important to note that the higher the order the more tedious the expressions become for the fundamental integrals hence the need for a more compact generating function for the fundamental integrals. In the case of the 9th order ordinary differential equation we list these just to illustrate the point as follows: x8 y ix

I0 = x8 y viii − 8x7 y vii + 56x6 y vi − 336x5 y v + 1680x4 y iv − 6720x3 y 000 + 20160x2 y 00 − 40320xy 0 + 40320y

x7 y ix

I1 = x7 y viii − 7x6 y vii + 42x5 y vi − 210x4 y v + 840x3 y iv − 2520x2 y 000 + 5040xy 00 − 5040y 0

x6 y ix

I2 = x6 y viii − 6x5 y vii + 30x4 y vi − 120x3 y v + 360x2 y iv − 720xy 000 + 720y 00

x5 y ix

I3 = x5 y viii − 5x4 y vii + 20x3 y vi − 60x2 y v + 120xy iv − 120y 000

x4 y ix

I4 = x4 y viii − 4x3 y vii + 12x2 y vi − 24xy v + 24y iv

x3 y ix

I5 = x3 y viii − 3x2 y vii + 6xy vi − 6y v

x2 y ix

I6 = x2 y viii − 2xy vii + 2y vi

1y ix

−→

I7 = y viii .

(22) This then makes the construction of J using the fundamental integrals a daunting exercise as the order of the equation increases. However one can verify 2 that the integral J = yy viii − y 0 y vii + y 00 y vi − y 000 y v + 21 y iv can also be obtained as a linear combination of the Ii ’s with i = 0, 1, 2, 3, 4, 5, 6, 7 above.

6

342

MOYO: sl(2,R) integral

5

Conclusion

We have discussed some aspects on the integrability of ordinary differential equations. The discussion extended to the relationship between the basis set of solutions of an nth order ordinary differential equation of order n, its solution symmetries and associated fundamental integrals. Of interest was the integral constructed from these fundamental integrals. As the order of the equation increases, particularly, from order n ≥ 9 the construction of the sl(2, R) integral from the fundamental integrals would involve very long expressions and hence a need for a construction of a more compact generating function to generate all such integrals. Otherwise the same sl(2, R) integral can be obtained by direct integration from the original equation after multiplying it by the autonomous integrating factor y. We also observe that the sl(2, R) integral only occurs in y (n) = 0 if n is odd and n ≥ 3. This discussion needs to be extended to all other equations of maximal symmetry and not necessarily with the simplest case considered here and to extend it to generalised symmetries.

6

Acknowledgments

The author thanks the NRF and the Durban University of Technology for their support.



JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.2, 344-352, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

MAC solution for a rectangular membrane
Igor Neygebauer
November 25, 2008

Abstract
A rectangular membrane with fixed boundary conditions under an applied transversal point force has a solution with a singularity; this is the Green's function of the problem. But if a string is considered, which could be taken as a strip cut from the above membrane under the same force, then its Green's function has no singularity. The MAC solution for the membrane will be considered. This solution transforms the above Green's function with a singularity into the MAC function, which does not have a singularity. The obtained MAC solution for the membrane corresponds to the Green's function of a string.
KEY WORDS: membrane, Laplace equation, MAC solution.

1 Introduction

Membranes are considered in biology, chemistry and physics. Information about experiments and theories is presented on the Internet by companies and agencies such as Lockheed, Volvo and the U.S. Army Research Office. A number of journals and problems concerning membrane theories are presented in references (1)-(12). We can conclude that the membrane problem is important and is under consideration by many research groups. The Green's function method is an important approach in membrane theory; it is widely used in micromechanics and nanomechanics too (4), (9). The mathematical aspect of the mechanical membrane is considered in this paper. The MAC solution of the stated problem will be considered instead of the strong or weak solutions (1), which are usually under consideration. It will be shown that the strong and the weak solutions are not physically acceptable in the case when one part of the boundary conditions is given at a point inside the domain occupied by the membrane. The MAC model of the membrane will be obtained, whose solution is called the MAC solution. While the classical equation of the membrane under small deformation is a wave equation, the MAC equation is an integro-differential equation. A conformal mapping is used to create the MAC Green's function, and the method of superposition is applied to create the MAC model.
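A quick numerical illustration of the contrast described above can be obtained from the classical free-space Green's function of the two-dimensional Laplacian and the standard fixed-end string Green's function; these formulas are textbook assumptions of the sketch below, not expressions taken from this paper.

```python
import math

def membrane_green(r):
    # free-space Green's function of the 2-D Laplacian: -(1/(2*pi)) ln r, unbounded as r -> 0
    return -math.log(r) / (2.0 * math.pi)

def string_green(x, x0, l=1.0):
    # fixed-end string of length l, unit tension, point load at x0: a bounded "tent" function
    return x * (l - x0) / l if x <= x0 else x0 * (l - x) / l

for r in (1e-1, 1e-3, 1e-6):
    print(f"r = {r:7.0e}   membrane ~ {membrane_green(r):7.3f}   string = {string_green(0.5 - r, 0.5):.6f}")
# the membrane deflection grows without bound near the load point, the string deflection stays finite
```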

2 Circular elastic membrane under point load

2.1 Statement of the membrane problem

The potential energy of the initially plane elastic membrane is

U = T0 ∫∫_D ( √(1 + u_x² + u_y²) − 1 ) dx dy,     (1)

where the membrane lies in the plane (x, y) in its natural state, T0 is its tension per unit of length, u(x, y) is the transversal displacement of the point (x, y) of the initially plane membrane, D is the domain of the plane (x, y) occupied by the membrane, and external forces are not applied. Apply the following boundary conditions:

u|_∂D = 0,     (2)

u(a, b) = u0 ≠ 0,     (3)

where ∂D is the boundary of D and (a, b) is an internal point of D. We will consider the linearized equations of the membrane with the elastic potential energy

U = (T0/2) ∫∫_D ( u_x² + u_y² ) dx dy,     (4)

where it is supposed that |u_x|, |u_y| ≪ 1.

s . . . length of the segment, s > 0. Based on a detailed knowledge of DTWT, it is possible to deduce fairly sophisticated rules for how to handle the signal segments. It is clear that it is necessary to extend every segment from the left by an exact number of samples from the preceding segment, and from the right by another number of samples from the subsequent segment (extension, overlap). However, the number of such samples depends on m, J and s, and it can be shown that every segment has to be extended by a different length from the left and from the right, and these lengths can also differ from segment to segment! And, of course, the first and the last segments have to be handled in a particular way.


Figure 2: Scheme of signal segmentation. The input signal x (a) is divided into segments of equal length and the last one can be shorter than this (b).

3.1 Important Theorems Derived from DTWT Algorithm

Before we introduce a detailed description of the SegWT algorithm, several theorems must be presented. More theorems, including proofs, can be found in [6, ch. 8]. We assume that the input signal x is divided into S ≥ 1 segments of equal length s. The single segments will be denoted ¹x, ²x, . . . , ˢx. The last one can be shorter than s, and the number S does not have to be known in advance. See Fig. 2. The signal boundary treatment considered in this paper is "zero-padding", where the boundaries are extended by zeros (most suitable for processing audio recordings, for example), but switching to another type of treatment is easy.

By the formulation that two sets of coefficients from the k-th decomposition level follow up on each other we mean a situation when two consecutive segments are properly extended, see Figures 2 and 3, so that applying the DTWT of depth k, with step 2a) omitted (cf. Algorithm 3), separately to both the segments, say ⁿx and ⁿ⁺¹x, and joining the resultant coefficients together leads to the situation that the last coefficient computed from ⁿx and the first coefficient computed from ⁿ⁺¹x would be neighboring in case the signal were transformed by the ordinary DTWT. Such a situation is desirable and the theorems below lead to proper handling of the consecutive segments.

Theorem 1 In case that the consecutive segments have

r(k) = (2^k − 1)(m − 1)     (1)

common input signal samples, the coefficients from the k-th decomposition level follow up on each other.

Thus, for a decomposition depth equal to J it is necessary to have r(J) = (2^J − 1)(m − 1) common samples in the two consecutive segments after they have been extended. This extension must be divided into the right extension of the first segment (of length R) and the left extension of the following segment (of length L), while r(J) = R + L. However, the lengths L, R ≥ 0 cannot be chosen arbitrarily. In general, the numbers L and R are not uniquely determined and must comply with strict rules that will be shown. The formula for the choice of extension Lmax, which is unique and the most appropriate in the case of real-time signal processing, is given in Theorem 2. For the purpose of the following, we assign the number of the respective segment to the variables Lmax, Rmin, l, so that the left extension of the n-th segment will be of length Lmax(n), the right extension will be of length Rmin(n), and the length of the original n-th segment with the left extension joined will be denoted l(n).

Figure 3: Illustration of extending the segments.

Theorem 2 Let the n-th segment be given, whose length including its left extension is l(n). The maximum possible left extension of the next segment, Lmax(n + 1), can be computed by the formula

Lmax(n + 1) = l(n) − 2^J ceil( (l(n) − r(J)) / 2^J ).     (2)

The minimum possible right extension of the given segment is then

Rmin(n) = r(J) − Lmax(n + 1).

(3)

Theorem 3 The length of the right extension of the n-th segment must comply with

Rmin(n) = 2^J ceil( ns / 2^J ) − ns,   n = 1, 2, . . . , S − 2.     (4)

From (4) it is clear that Rmin is periodic with respect to n with period 2^J, i.e. Rmin(n + 2^J) = Rmin(n). This property, among other things, can be seen in Table I.


Theorem 4 (on the total length of segment) After the extension, the n-th segment of original length s will be of total length Σ(n), which can acquire one of two values, either

r(J) + 2^J ceil( s / 2^J )   or   r(J) + 2^J ceil( s / 2^J ) − 2^J.     (5)

The overall illustration of SegWT can be seen in Fig. 4. The particular algorithms are described in detail in the next sections.
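As a sanity check on formulas (1)-(5), the short Python sketch below (the helper names r, Rmin and Lmax_next are ours) reproduces the first columns of Table I for J = 3, m = 16, s = 513; the first segment is extended from the left by r(J) zeros, so l(1) = r(J) + s.

```python
import math

def r(J, m):                      # overlap required at depth J, Theorem 1
    return (2**J - 1) * (m - 1)

def Rmin(n, s, J):                # minimum right extension, Theorem 3
    return 2**J * math.ceil(n * s / 2**J) - n * s

def Lmax_next(l_n, J, m):         # maximum left extension of the next segment, Theorem 2
    return l_n - 2**J * math.ceil((l_n - r(J, m)) / 2**J)

J, m, s = 3, 16, 513
lmax = r(J, m)                    # = 105, left extension of the first segment
for n in range(1, 5):
    total = lmax + s + Rmin(n, s, J)
    print(n, lmax, Rmin(n, s, J), total)   # expect (105,7,625), (98,6,617), (99,5,617), (100,4,617)
    lmax = Lmax_next(lmax + s, J, m)
```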

3.2 Algorithm of Forward SegWT

The algorithm works as follows: it reads (receives) individual segments of the input signal, makes them extend each other in a proper way, then computes the wavelet coefficients in a modified way and, in the end, joins the coefficients. There is no need to know in advance how many segments there will be in total; we only require that this is known at the moment the last segment is received.

Algorithm 2 Let the wavelet filters g and h be of length m and the decomposition depth be J. The boundary treatment mode is "zero-padding". The segments of length s > 0 of the input signal x are denoted ¹x, ²x, ³x, . . . The last segment contains s₀ ≤ s samples.

1. Set n = 1, last = 0.
2. Read the first segment, ¹x. Extend it from the left by r(J) zero samples. Update 'last'.
3. If, at the same time, the n-th segment is the last one:
   (a) Extend the n-th segment from the right by such a number of zero samples that its total length will be Lmax(n) + s.
   (b) Extend the n-th segment from the right by r(J) zero samples.
   (c) Compute the transform of depth J of the extended segment using Algorithm 3.
   (d) Modify the vectors containing the wavelet coefficients by trimming off a certain number of redundant coefficients from the left side, specifically: on the k-th level, k = 1, 2, . . . , J − 1, trim off r(J − k) coefficients.
   (e) Trim off redundant coefficients from the right so that on the k-th level floor( 2^(−k) (Lmax(S) + s₀) ) coefficients remain.
   (f) Trim off the vectors in the same manner as in 3d, but this time from the right.
   (g) Store the result as ⁿa(J), ⁿd(J), ⁿd(J−1), . . . , ⁿd(1).
   Otherwise


Figure 4: Overall scheme of the SegWT, forward and inverse parts, here in a particular case when s = 92: a) input signal segments, b) extending them (the left and right lengths differ from segment to segment), c) computation of the forward part, d) the computed blocks of coefficients, e) computation of the inverse part, f) the reconstructed signal.


   (h) Read the (n + 1)-th segment, update 'last'.
   (i) Compute Lmax(n + 1) and Rmin(n) (see Theorem 2).
   (j) Extend the n-th segment from the right side:
       If last = 1 (i.e. we have the last segment):
          i. Compute the difference diff = max(0, Rmin(n) − s₀).
          ii. If diff > 0 (i.e. there are not enough samples in the last segment for extension by Rmin(n)):
              A. Extend the n-th segment from the right side by s₀ samples from the last segment.
              B. Extend the n-th segment from the right side by another diff zero samples.
             Otherwise:
              C. Extend the n-th segment from the right side by Rmin(n) samples taken from the last segment.
       Otherwise:
          iii. Extend the n-th segment from the right side by Rmin(n) samples taken from the (n + 1)-th segment.
   (k) Extend the (n + 1)-th segment from the left side by Lmax(n + 1) samples taken from segment n.
   (l) Compute the DTWT of depth J from the (extended) n-th segment using Algorithm 3.
   (m) Modify the particular vectors containing the coefficients in the same manner as in 3d.
   (n) Store the result as ⁿa(J), ⁿd(J), ⁿd(J−1), . . . , ⁿd(1).
   (o) Increase n by 1 and go to item 3.

Algorithm 3 This sub-algorithm is identical to Algorithm 1 with the exception that we omit step 2a), i.e. we do not extend the vector.

The output of Algorithm 2 is S(J + 1) vectors of wavelet coefficients

{ ⁱa(J), ⁱd(J), ⁱd(J−1), . . . , ⁱd(1) },   i = 1, . . . , S.

(6)

If we simply join these vectors together, we obtain a set of J + 1 vectors, which are identical to the wavelet coefficients of signal x. The flowchart of Algorithm 2 is in Fig. 5.

3.3 Corollaries and Limitations of SegWT Algorithm

In [6] several practical corollaries for SegWT can be found, e.g. that the segments cannot be shorter than 2^J. From the description in the above sections it should be clear that the delay of Algorithm 2 is one segment (i.e. s samples) plus the time needed for the computation of the coefficients from the current segment. It can be easily shown that in the special case of s being divisible by 2^J it even holds that Rmin(n) = 0 for every n ∈ N (see Theorem 3), i.e. the delay of the forward method is determined only by the computation time!


Figure 5: Flowchart of the forward SegWT, with zero-padding treatment of the signal boundaries. The main loop, which is applied to all the segments but the first and last ones, is emphasized by the thicker line.


Table I: Example — lengths of extensions for different lengths of segments s. The depth of decomposition is J = 3 and the filter length is m = 16.

  s   |            n:   1    2    3    4    5    6    7    8    9   10   11   12  ...
 512  | Lmax(n)       105  105  105  105  105  105  105  105  105  105  105  105 ...
      | Rmin(n)         0    0    0    0    0    0    0    0    0    0    0    0 ...
      | Σ(n)          617  617  617  617  617  617  617  617  617  617  617  617 ...
 513  | Lmax(n)       105   98   99  100  101  102  103  104  105   98   99  100 ...
      | Rmin(n)         7    6    5    4    3    2    1    0    7    6    5    4 ...
      | Σ(n)          625  617  617  617  617  617  617  617  625  617  617  617 ...
 514  | Lmax(n)       105   99  101  103  105   99  101  103  105   99  101  103 ...
      | Rmin(n)         6    4    2    0    6    4    2    0    6    4    2    0 ...
      | Σ(n)          625  617  617  617  625  617  617  617  625  617  617  617 ...
 515  | Lmax(n)       105  100  103   98  101  104   99  102  105  100  103   98 ...
      | Rmin(n)         5    2    7    4    1    6    3    0    5    2    7    4 ...
      | Σ(n)          625  617  625  617  617  625  617  617  625  617  625  617 ...
 ...

3.4 Few Examples

• For J = 5 and m = 8, the minimum segment length is 32 samples. When we set s = 256, Rmin will always be zero and Lmax = r(5) = 217. The length of every extended segment will be 256 + 217 = 473 samples.

• For J = 5 and m = 8 we set s = 300, which is not divisible by 2^5. Thus Rmin and Lmax will alternate with period 8 such that 0 ≤ Rmin ≤ 31 and 186 ≤ Lmax ≤ 217. The total length of a segment after extension will be either 505 or 537.

• (Example illustrated in Fig. 4, and checked numerically in the sketch below.) For J = 3, m = 4, s = 92, the extensions will alternate between two states, either Rmin = 4 and Lmax = 17 or Rmin = 0 and Lmax = 21. The length of the extended segments will be 109 or 117 samples.

The increase in the number of samples entering the computation is naturally the price paid for the fact that no errors originate from processing the boundaries.
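The third example can be reproduced with the same expressions as in Theorems 1-3; this is only a small sketch, with variable names of our own choosing.

```python
import math

J, m, s = 3, 4, 92
r = (2**J - 1) * (m - 1)                                   # = 21
lmax = r                                                   # first segment: left-extended by r zeros
for n in range(1, 5):
    rmin = 2**J * math.ceil(n * s / 2**J) - n * s          # alternates 4, 0, 4, 0, ...
    print(n, lmax, rmin, lmax + s + rmin)                  # totals alternate between 117 and 109
    lmax = (lmax + s) - 2**J * math.ceil((lmax + s - r) / 2**J)
```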

3.5 Algorithm of Inverse SegWT

The inverse algorithm is described below, in less detail than the forward one. Blocks of wavelet coefficients (6) produced segment-by-segment by the forward SegWT constitute the input for the inverse algorithm. Analogously to the forward case, we use the boolean flag 'last', which becomes true when the very last segment has to be processed. In addition, due to the downsampling step of the forward transform, we lose information about the total length of the signal; more precisely, we do not know whether the original length was a or a + 1 for some integer a. We could solve this problem by accumulating the lengths of the individual inverted segments; however, such a number could be very large, possibly overflowing the processor arithmetic. A better solution is just to keep the signal parity (i.e. whether the accumulated length is even or odd). This information is then used at the very end of the signal for deciding whether or not to cut the last reconstructed sample.

The inverse SegWT partly utilizes the overlap-add principle for joining the reconstructed pieces of the time-domain signal. The length of the overlap stays r(J) all the time. As for the illustration, we again refer to Fig. 4.

Algorithm 4 Let the decomposition depth J be given, as well as wavelet reconstruction filters g̃ and h̃ of length m, and coefficients ⁿa(J), ⁿd(J), ⁿd(J−1), . . . , ⁿd(1) for all n.

1. Set n = 1. Set last = 0.
2. If last = 1 then the Algorithm ends.
3. Read the n-th block of coefficients and update 'last'.
4. Extend the detail coefficients: on the k-th level, k = 1, . . . , J − 1, append r(J − k) zero coefficients from the left side.
5. Compute the inverse transform of depth J using Algorithm 5.
6. If n ≠ 1, recall the samples for the overlap, saved in the last cycle, and add them to the current inverted block.
7. Update the parity of the signal.
8. If last ≠ 1, append the central, non-overlapping part to the output and save the samples of the overlap of the current inverted segment for the next cycle.
   Otherwise, append the whole inversion to the output and, if necessary, crop several samples from the end of the signal.
9. The output (a segment of a time-domain signal) is now complete and prepared to be "sent".
10. Increase n by 1 and return to item 2.

Algorithm 5 This algorithm is identical to the ordinary inverse wavelet transform (i.e. upsampling – filtering – summing – cropping), but the cropping phase is omitted here.

The flowchart of Algorithm 4 can be seen in Fig. 6.
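The parity bookkeeping in step 7 can be illustrated by a tiny sketch (the segment lengths and variable names below are hypothetical): instead of accumulating the full reconstructed length, only its value modulo 2 is kept, which carries exactly the information needed at the end of the signal.

```python
segment_lengths = [92, 92, 92, 57]      # hypothetical reconstructed segment lengths
parity = 0
for length in segment_lengths:
    parity = (parity + length) % 2      # same information as the total length mod 2
print(parity, sum(segment_lengths) % 2) # identical values; the parity decides whether the last sample is cropped
```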


Figure 6: Flowchart of the inverse SegWT. The main loop is emphasized by the thicker line.


3.6 Joining Forward and Inverse Parts to Form Algorithm Capable of Real-Time Performance

The algorithms in Sec. 3.2 and 3.5 were presented separately for clarity. However, their easy integration into a simple joint loop forms a universal algorithm for any wavelet-type processing task in real time. It can be shown that in the case of s being divisible by 2^J the total delay is no bigger than s samples, and in other cases no bigger than 2s.

Acknowledgments The paper was prepared within the framework of No. 102/06/P407 and No. 102/07/1303 projects of the Grant Agency of the Czech Republic and No. 1ET301710509 project of the Czech Academy of Sciences.

References
[1] Ch. Chrisafis and A. Ortega, Line-Based, Reduced Memory, Wavelet Image Compression. IEEE Transactions on Image Processing, vol. 9, no. 3, pp. 378–389, 2000.
[2] D. Darlington, L. Daudet and M. Sandler, Digital Audio Effects in the Wavelet Domain. In Proc. of the 5th Int. Conf. on Digital Audio Effects (DAFX-02), Hamburg, 2002.
[3] W. Jiang and A. Ortega, Lifting factorization-based discrete wavelet transform architecture design. IEEE Trans. on Circuits and Systems for Video Technology, vol. 11, no. 5, pp. 651–657, 2001.
[4] H. d. O. Mota, F. H. Vasconcelos and R. M. Silva, Real-time wavelet transform algorithms for the processing of continuous streams of data. Intelligent Signal Processing, 2005 IEEE International Workshop on, 1–3 Sept. 2005, pp. 346–351. ISBN 0-7803-9030-X.
[5] J. H. Nealand, A. B. Bradley and M. Lech, Overlap-Save Convolution Applied to Wavelet Analysis. IEEE Signal Processing Letters, vol. 10, no. 2, pp. 47–49, 2003.
[6] P. Rajmic, Exploitation of the wavelet transform and mathematical statistics for separation of signals and noise (in Czech), PhD Thesis, Brno University of Technology, Brno, 2004.
[7] G. Strang and T. Nguyen, Wavelets and Filter Banks. Wellesley Cambridge Press, 1996.


JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 407-415, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

APPROXIMATING COMMON FIXED POINTS BY AN ITERATIVE PROCESS INVOLVING TWO STEPS AND THREE MAPPINGS

SAFEER HUSSAIN KHAN

Abstract. Agarwal et al [1] introduced an iteration process which involves two steps and one mapping. They proved some results using nearly uniformly k-contractions. On the other hand, Berinde [2] introduced a new class of quasi-contractive type operators on a normed space. First we compare these two types of mappings. We then modify both Agarwal's process and the mappings of Berinde to the case of three mappings, keeping the number of steps the same. We use this modified process first to prove some strong convergence theorems to approximate common fixed points of three quasi-contractive operators in normed spaces, and then of three nearly uniformly k-contractions in uniformly convex Banach spaces. This generalizes corresponding results of Berinde [2] and Agarwal et al [1] and unifies a number of results.

1. Introduction and Preliminaries

Let C be a nonempty convex subset of a normed space E and let T : C → C be a mapping. Let {α_n} be an appropriately chosen sequence in (0, 1). Throughout this paper, N will denote the set of all positive integers, I the identity mapping on C, F(T) the set of all fixed points of T and F = ∩_{i=1}^{3} F(T_i) the set of common fixed points of the mappings T_i : C → C, i = 1, 2, 3. The Mann iterative process [5] is defined by the sequence {x_n}:

x_1 = x ∈ C,   x_{n+1} = (1 − α_n) x_n + α_n T x_n,   n ∈ N,     (1.1)

where {α_n} is in (0, 1).

2000 Mathematics Subject Classification. 47H10, 54H25. Key words and phrases. Quasi-contractive type operator, nearly uniformly k-contraction, iterative process, common fixed point, strong convergence.


The sequence {x_n} defined by

x_1 = x ∈ C,   x_{n+1} = (1 − α_n) x_n + α_n T y_n,   y_n = (1 − β_n) x_n + β_n T x_n,   n ∈ N,     (1.2)

where {α_n} and {β_n} are in (0, 1), is known as the Ishikawa iterative process [4]. Recently, Agarwal et al [1] introduced the following iterative process:

x_1 = x ∈ C,   x_{n+1} = (1 − α_n) T x_n + α_n T y_n,   y_n = (1 − β_n) x_n + β_n T x_n,   n ∈ N,     (1.3)

where {α_n} and {β_n} are in (0, 1). Note that neither (1.1) nor (1.2) can be deduced from (1.3). They defined a nearly uniformly k-contraction as a mapping T : C → C satisfying

‖T^n x − T^n y‖ ≤ k ( ‖x − y‖ + a_n )     (1.4)

where 0 < k < 1 and {a_n} is a sequence in [0, ∞) with a_n → 0. Clearly every contraction is a nearly uniformly k-contraction. They proved the following strong convergence result.

Theorem 1. Let E be a uniformly convex Banach space and let C be its closed and convex subset. Let T : C → C be a nearly uniformly k-contraction with a sequence {a_n} and F(T) ≠ ∅ such that Σ_{n=1}^{∞} a_n < ∞. Define a sequence {x_n} in C as:

x_1 = x ∈ C,   x_{n+1} = (1 − α_n) T x_n + α_n T y_n,   y_n = (1 − β_n) x_n + β_n T x_n,   n ∈ N,

where {α_n}, {β_n} are in (0, 1). Then {x_n} converges strongly to a fixed point of T.

On the other hand, Berinde [2] introduced a new class of quasi-contractive type operators on a normed space E satisfying

‖Tx − Ty‖ ≤ δ ‖x − y‖ + L ‖Tx − x‖     (1.5)

for any x, y ∈ E, 0 < δ < 1 and L ≥ 0. This class of mappings is larger than not only the contractions but also the Kannan mappings and the Zamfirescu operators. For details, see [2]. The following comparison of the definitions (1.4) and (1.5) shows that Theorem 1 does not cover the above type of operators.


Proposition 1. (1.5) does not imply (1.4) in general. However, if T is the identity mapping, or x is a fixed point of T, or L = 0, then a_n may be chosen identically zero.

Proof. Applying (1.5) repeatedly,

‖T^n x − T^n y‖ = ‖T(T^{n−1} x) − T(T^{n−1} y)‖
  ≤ δ ‖T^{n−1} x − T^{n−1} y‖ + L ‖T^n x − T^{n−1} x‖
  ≤ δ ( δ ‖T^{n−2} x − T^{n−2} y‖ + L ‖T^{n−1} x − T^{n−2} x‖ ) + L ‖T^n x − T^{n−1} x‖
  ≤ . . .
  ≤ δ^n ‖x − y‖ + L ( δ^{n−1} ‖Tx − x‖ + δ^{n−2} ‖T² x − Tx‖ + · · · + δ ‖T^{n−1} x − T^{n−2} x‖ + ‖T^n x − T^{n−1} x‖ ).     (1.6)

But

‖T² x − Tx‖ = ‖Tx − T(Tx)‖ ≤ δ ‖x − Tx‖ + L ‖Tx − x‖ = (δ + L) ‖x − Tx‖,

and

‖T³ x − T² x‖ = ‖T(Tx) − T(T² x)‖ ≤ δ ‖Tx − T² x‖ + L ‖T² x − Tx‖ = (δ + L) ‖Tx − T² x‖ ≤ (δ + L)² ‖x − Tx‖,

and so on; in general, ‖T^{j} x − T^{j−1} x‖ ≤ (δ + L)^{j−1} ‖x − Tx‖. Hence (assume L > 0; the case L = 0 is treated separately below)

δ^{n−1} ‖Tx − x‖ + δ^{n−2} ‖T² x − Tx‖ + · · · + ‖T^n x − T^{n−1} x‖
  ≤ [ δ^{n−1} + δ^{n−2}(δ + L) + · · · + (δ + L)^{n−1} ] ‖x − Tx‖
  = δ^{n−1} [ 1 + (1 + L/δ) + (1 + L/δ)² + · · · + (1 + L/δ)^{n−1} ] ‖x − Tx‖
  = (δ^n / L) ( (1 + L/δ)^n − 1 ) ‖x − Tx‖.

Hence (1.6) becomes

‖T^n x − T^n y‖ ≤ δ^n ‖x − y‖ + L (δ^n / L) ( (1 + L/δ)^n − 1 ) ‖x − Tx‖
  = δ^n ( ‖x − y‖ + ( (1 + L/δ)^n − 1 ) ‖x − Tx‖ ).

Choose k = δ and define a_n = ( (1 + L/δ)^n − 1 ) ‖x − Tx‖.

If T is the identity mapping, or x is a fixed point of T, or L = 0, then a_n is identically zero. If L ≠ 0, then 1 + L/δ ∈ (1, ∞), so (1 + L/δ)^n diverges and a_n does not tend to 0 as n → ∞. Consequently, a mapping satisfying (1.5) need not satisfy (1.4). Berinde [2] used the Ishikawa iterative process (1.2) to approximate fixed points of the class of operators (1.5) in a normed space. Actually, his main theorem was the following:


Theorem 2. Let C be a nonempty closed convex subset of a normed space E. Let T : C → C be an operator satisfying (1.5). Let {x_n} be defined by the iterative process (1.2). If F(T) ≠ ∅ and Σ_{n=1}^{∞} α_n = ∞, then {x_n} converges strongly to a fixed point of T.

Now note that although the iterative process used in Theorem 1 is better than the one used in Theorem 2 (see [1]), it does not cover the type of operators used in Theorem 2. We thus need a modification of both. Keeping in mind that approximating common fixed points has a direct link with the minimization problem (see, for example, [6]), we modify both (1.3) and (1.5) to the case of three mappings T₁, T₂ and T₃ as follows:

x_1 = x ∈ C,   x_{n+1} = (1 − α_n) T₁ x_n + α_n T₂ y_n,   y_n = (1 − β_n) x_n + β_n T₃ x_n,   n ∈ N,     (1.7)

where {α_n} and {β_n} are in (0, 1), and

max_{i∈{1,2,3}} ‖T_i x − T_i y‖ ≤ δ ‖x − y‖ + L max_{i∈{1,2,3}} ‖T_i x − x‖     (1.8)

for any x, y ∈ E, 0 < δ < 1 and L ≥ 0. We note that (1.7) reduces to (1.3) when T_i = T for all i = 1, 2, 3; to (1.2) when T₁ = I, T₂ = T₃ = T; and to (1.1) when T₁ = T₃ = I, T₂ = T. We also note that (1.8) reduces to (1.5) in any of the following cases: when T_i = T for all i = 1, 2, 3; when any two of T_i, i = 1, 2, 3, equal T and the third one is I; and when two of the T_i are the identity but the third one is T. In the rest of the paper, we use the iterative process (1.7), where the mappings satisfy (1.8), to prove a common-fixed-point result in normed spaces. A corollary to this result will cover the case of operators defined by (1.5) using (1.3). We also obtain generalizations of some results of [2]. Moreover, we use a variant of (1.7) to prove a result for nearly uniformly k-contractions in Banach spaces, thereby generalizing a result of Agarwal et al [1].

2. Common fixed points by a two steps three mappings process

2.1. Results in normed spaces. Our first theorem deals with the iterative process (1.7) for the mappings defined in (1.8).
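Before turning to the convergence results, the mechanics of the process (1.7) can be seen in a purely illustrative numerical sketch: the three mappings, the constant step sizes and the iteration count below are hypothetical choices of ours (simple self-maps of the real line sharing the fixed point 0), not mappings taken from the paper.

```python
import math

# Hypothetical example mappings on R with common fixed point 0.
T1 = lambda x: x / 2.0
T2 = lambda x: x / 3.0
T3 = lambda x: math.sin(x) / 2.0

alpha = beta = 0.5          # constant alpha_n, beta_n in (0, 1)
x = 1.0                     # x_1
for n in range(1, 31):
    y = (1 - beta) * x + beta * T3(x)          # y_n     = (1 - beta_n) x_n + beta_n T3 x_n
    x = (1 - alpha) * T1(x) + alpha * T2(y)    # x_{n+1} = (1 - alpha_n) T1 x_n + alpha_n T2 y_n
print(f"x_31 = {x:.3e}")     # the iterates shrink rapidly towards the common fixed point 0
```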

412

6

SAFEER HUSSAIN KHAN

Theorem 3. Let C be a nonempty closed convex subset of a normed space E: Let Ti : C ! C; i = 1; 2; 3 be three operators satisfying (1:8) and F 6= ?: Let fxn g be de…ned by the iterative process (1:7): If f n g P and f n g are sequences in (0; 1) such that 1 n=1 n n = 1; then fxn g converges strongly to a point of F: Proof. Let w 2 F: Then (2.1)

kxn+1

wk = k(1 (1

Since kT2 yn wk y = yn ; (1:8) gives

n )T1 xn

+

wk wk + n kT2 yn

n ) kT1 xn

maxi2f1;2;3g kTi yn

(2.2)

n T2 yn

kT2 yn

wk ; therefore for x = w and kyn

wk

wk :

wk

Simillarly, the choice x = w and y = xn provides (2.3)

kT3 xn

kxn

wk

wk :

But kyn

wk

(2.4)

(1 (1 (1

n ) kxn

wk + n kT3 xn wk wk + n kxn wk n ) kxn )) kxn wk : n (1

Then using of (2:1) through (2:4) ; we obtain kxn+1

wk

(1 (1 = [(1 = [(1 = [(1

n ) kT1 xn

kxn n) + n n+ n n n (1

n)

wk + n kT2 yn wk wk + n (1 )) kxn n (1 (1 ))] kxn wk n (1 )] kxn wk n n (1 )] kxn wk

By induction, kxn+1

wk

n Y

= kx1 = kx1 for all n 2 N.

[1

(1

)

k

k=1

wk exp

k ] kx1

n X

(1

wk )

k

k

k=1

wk exp

(1

)

n X k=1

k

k

!

!

wk

413

TWO STEPS THREE MAPPINGS ITERATIVE PROCESS

Since 0


:y = (1 n n n ) xn + n T3 xn ; n 2 N

and f

ng

are in (0; 1):

Theorem 5. Let C be a nonempty closed convex subset of a normed space E: Let Ti : C ! C; i = 1; 2; 3 be three nearlyPuniformly kcontractions with a sequence fan g and F 6= ? such that 1 n=1 an < 1: Let fxn g be de…ned by the iterative process (2:5): If f n g and f n g are sequences in (0; 1); then fxn g converges strongly to a common …xed point of Ti ; i = 1; 2; 3: Proof. Let w 2 F: Then kxn+1

n n wk = k(1 wk n )T1 xn + n T2 yn n (1 wk + n kT2n yn wk n ) kT1 xn (1 wk + an ) + n k (kyn wk + an ) n )k (kxn = k [(1 wk + n kyn wk + an ] n ) kxn (1 wk + n (1 wk n ) kxn n ) kxn k n + n n kT3 xn wk + an

k = k

(1

n ) kxn

+k (1

n

n n

+

wk + n (1 kxn wk + k (1 +k

n)

n

n

+k n a n n + an

k [(1 (1 k) n n ) kxn k kxn wk + k(k + 1)an kxn wk + k(k + 1)an

n ) kxn

n n an n

wk

+ an

kxn

wk

wk + (k + 1)an ]

It is well-known that if frn g and fsn g are P sequences of nonnegative real numbers such that rn+1 rn + sn and 1 n=1 sn < 1, then lim rn exists. Thus lim kxn wk exists. Call it c: If c > 0, then an ! 0 together with kxn+1 wk k kxn wk + k(k + 1)an gives c kc; a contradiction. Hence lim kxn wk = 0 and fxn g converges strongly to a common …xed point of Ti ; i = 1; 2; 3 as required.

415

TWO STEPS THREE MAPPINGS ITERATIVE PROCESS

9

Remark. In the above theorem, if T1 is a nearly uniformly k1 -contraction with a sequence fa1n g ; T2 is a nearly uniformly k2 -contraction with a sequence fa2n g and T3 is a nearly uniformly k3 -contraction with a sequence fa3n g ; then we can choose an = min(a1n ; a2n ; a3n ) and k = min(k1; k2; k3 ) so that our result still remains valid. Following is the Theorem 3.7 of [1] which we can obtain now by choosing T1 = T2 = T3 = T in Theorem 5: Corollary 3. Let E be a uniformly convex Banach space and let C be its closed and convex subset. Let T : C ! C be a nearlyP uniformly k-contraction with a sequence fan g and F (T ) 6= ? such that 1 n=1 an < 1: De…ne a sequence fxn g in C as in (1:3) where f n g; f n g are in (0; 1): Then fxn g converges strongly to a …xed point of T: Acknowledgement. The author gratefully acknowledges the support from Qatar University, Qatar to carry out this work. References [1] R.P.Agarwal, D. O’Regan and D.R. Sahu, Iterative construction of …xed points of nearly asymptotically nonexpansive mappigs, J. Nonlinear Convex Anal., 8(1)2007, 61-79. [2] V. Berinde, A convergence theorem for some mean value …xed point iterations procedures, Dem.Math., 38(1)2005, 177-184. [3] — — — — ,On the convergence of Ishikawa iteration in the class of quasi contractive operators, Acta.Math.Univ.Comenianae, LXXIII (1) 2004, 119-126. [4] S.Ishikawa, Fixed points by a new iteration method, Proc.Amer.Math.Soc., 44 (1974), 147-150. [5] W.R. Mann, Mean value methods in iterations, Proc.Amer.Math.Soc., 4 (1953), 506-510. [6] W. Takahashi, Iterative methods for approximation of …xed points and their applications, J.Oper.Res.Soc. Jpn., 43(1) (2000), 87 -108. Safeer Hussain Khan, Department of Mathematics and Physics, Qatar University, Doha 2713, State of Qatar. E-mail address: [email protected]; [email protected]

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 416-425, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

LAWTON’S CONDITIONS ON REGULAR LOW PASS FILTERS A. San Antol´ın Department of Mathematics and Statistics, Auburn University, Auburn, Al., USA, 36849. E-mail: [email protected] Abstract n

For the study of Zⁿ-periodic bounded measurable functions H which are low pass filters in a multiresolution analysis defined on L²(Rⁿ) with a dilation given by a fixed linear invertible map A : Rⁿ → Rⁿ such that A(Zⁿ) ⊂ Zⁿ and all (complex) eigenvalues of A have modulus greater than 1, one should assume that the infinite product ∏_{j=1}^{∞} |H((A*)^{−j} t)| converges almost everywhere on Rⁿ and is A*-locally nonzero at the origin, where A* is the adjoint map of A. In this paper we find a condition on the regularity of H at the origin which assures that the above requirements on the infinite product hold. Moreover, depending on the regularity we assume on H, we obtain different necessary and sufficient conditions for H to be a low pass filter in an A-MRA following the strategy of Lawton.

Keywords: Fourier transform, H¨older continuous function, Lawton’s conditions, locally nonzero function, low pass filter in a multiresolution analysis.

1

Introduction and Definitions.

A multiresolution analysis (MRA) is a general method introduced by Mallat [21] and Meyer [22] for constructing wavelets. Afterwards, the concept of MRA was considered on L2 (Rn ), n ≥ 1, (see [20],[11],[26],[27]) in a more general context, where instead of the dyadic dilation one considers the dilation given by a fixed linear invertible map A : Rn → Rn such that A(Zn ) ⊂ Zn and all (complex) eigenvalues of A have modulus greater than 1. Here and further we use the same notation for the linear invertible map A and its matrix with respect to the canonical base. Given such a linear invertible map A one defines an A-MRA as a sequence of closed subspaces Vj , j ∈ Z, of the Hilbert space L2 (Rn ) that satisfies the following conditions: (i) ∀j ∈ Z,

Vj ⊂ Vj+1 ;

(ii) ∀j ∈ Z,

f (x) ∈ Vj ⇔ f (Ax) ∈ Vj+1 ;

1

LAWTON'S CONDITIONS ON FILTERS

A. San Antol´ın

417

2

(iii) ∪j∈Z Vj = L2 (Rn ); (iv) There exists a function φ ∈ V0 , that is called scaling function, such that { φ(x − k) : k ∈ Zn } is an orthonormal basis for V0 . Properties of scaling functions have been studied by several authors (see [21],[15],[9],[4],[11],[1],[8],[14],[19],[5]). In this paper, we adopt the convention that the Fourier transform of a function f ∈ L1 (Rn ) ∩ L2 (Rn ) is defined by Z b f (y) = f (x)e−2πix·y dx. Rn

−1 If φ is a scaling function of an A-MRA, observe that d−1 x) ∈ V−1 ⊂ A φ(A V0 , where dA = |detA|. By the condition (iv) we express this function in terms of the orthonormal basis {φ(x − k) : k ∈ Zn } as X −1 d−1 φ(A x) = ak φ(x − k), A k∈Zn

where the convergence is in L2 (Rn ) and {ak }k∈Zn ∈ l2 . Taking the Fourier transform, we obtain b ∗ t) = H(t)φ(t) b φ(A

a.e. on Rn

where A∗ is the adjoint map of A and X H(t) = ak e−2πik·t k∈Zn

is a Zn -periodic function which is called low pass filter associated with the scaling function φ, or shortly low pass filter. We study the problem of when a given measurable function H is a low pass filter in an A-MRA assuming some regularity on H. Before formulating our results let us introduce some notation and definitions. Let {ei }ni=1 be the natural basis of Rn , Tn = Rn /Zn and if we set f ∈ 2 L (Tn ) we will understand that f is defined on the whole space Rn as a Zn periodic function. With some abuse of the notation we consider also that Tn is the unit cube [0, 1)n . We will denote Br = {x ∈ Rn : |x| < r}. For a set E ⊂ Rn and a point x ∈ Rn we will write x + E = {x + y : for y ∈ E}. The Lebesgue measure of a measurable set E ⊂ Rn will be denoted by |E|n and by χE the characteristic function of the set E i.e. χE (t) takes the value 1 if t ∈ E and 0 otherwise. Given N ∈ {1, 2, ...}, the set of N times differentiable functions f : Rn → C will be denoted by C N (Rn ). We will say that a measurable function f : Rn → R is H¨ older continuous at x0 ∈ Rn (cf. [24]) if there exist an open neighborhood of x0 , U ⊂ Rn , and constants C, α > 0 such that |f (y) − f (x0 )| ≤ C|y − x0 |α ,

∀y ∈ U.

(1)

418

LAWTON'S CONDITIONS ON FILTERS

A. San Antol´ın

3

If α=1, f is said to be Lipschitz continuous at x0 . In [5] the following definitions were introduced. Definition 1. We will say that x ∈ Rn is a point of A-density for a set E ⊂ Rn , |E|n > 0, if for any r > 0 |E ∩ (A−j Br + x)|n = 1. j→∞ |A−j Br |n lim

Definition 2. Let f : Rn −→ C be a measurable function. We say that x ∈ Rn is a point of A-approximate continuity of the function f if there exists E ⊂ Rn , |E|n > 0, such that x is a point of A-density for the set E and lim

f (y) = f (x).

y→x y∈E

Definition 3. A measurable function f : Rn → C is said to be A-locally nonzero at a point x ∈ Rn if for any ε, r > 0 there exists j ∈ N such that | { y ∈ A−j Br + x : f (y) = 0 } |n < ε|A−j Br |n . For a given φ ∈ L2 (Rn ), set Φφ (t) =

X

b + k)|2 . |φ(t

(2)

k∈Zn

If A : Rn → Rn is a linear invertible map such that A(Zn ) ⊂ Zn and all (complex) eigenvalues of A have modulus greater than 1, the quotient group Zn /A(Zn ) is well defined, then we will denote by ∆A ⊂ Zn a full collection of representatives of the cosets of Zn /A(Zn ). Recall that there are exactly dA cosets (see [11] and [27, p. 109]). dA −1 Let us fix ∆A∗ = {pi }i=0 , where p0 = 0. ∗ Since A is a linear invertible map such that all (complex) eigenvalues of A∗ have modulus greater than 1 (cf. [27, p. 122], [2]) there exist K > 0 and 0 < β < 1 such that |(A∗ )−j t| ≤ Kβ j |t|

∀j ∈ {1, 2, ...}.

(3)

Given H ∈ L∞ (Tn ) the following continuous linear operator P : L1 (Tn ) → L1 (Tn ): P f (t) =

dX A −1

|H((A∗ )−1 (t + pi ))|2 f ((A∗ )−1 (t + pi ))

i=0

is well defined. This operator was first introduced by M. Bownik [2] as a generalization of the analogous operator introduced by W. Lawton [18] for dyadic dilations.

LAWTON'S CONDITIONS ON FILTERS

A. San Antol´ın

2

419

4

History References.

A. Cohen [6] gave the first necessary and sufficient conditions for a trigonometric polynomial H to be a low pass filter of an MRA on L2 (R). Those conditions were extended for differentiable functions by E. Hern´andez and G. Weiss [14] and for the H¨ older continuous functions by R. F. Gundy [12]. Furthermore, Cohen’s approach was studied by M. Papadakis, H. Siki´c and G. Weiss [24] to low pass filters that are H¨ older continuous at the origin. At the same time as Cohen’s condition appeared, W. Lawton [17] gave another sufficient condition of a different nature when H is a trigonometric polynomial. The necessity of Lawton’s condition was settled in 1990 by both A. Cohen (see [7]) and W. Lawton [18], independently, (see [9, p. 182–193]). For our general case when an MRA is defined on L2 (Rn ), n ≥ 1, and for dilations given by a map A as above described, a generalization of Cohen’s conditions for low pass filters associated with characteristic scaling functions was proved by K. Gr¨ ochening and W. R. Madych [11] and by W. R. Madych [20]. Afterwards, a generalization of Cohen’s and Lawton’s conditions were obtained by M. Bownik [2] where the results were presented with more general assumptions about the regularity of low pass filters. Other necessary and sufficient conditions on trigonometric polynomial low pass filters appeared in the paper by J. C. Lagarias and Y. Wang [16]. The problem of characterization of low pass filters of an MRA was posed in the book by E. Hern´ andez and G. Weiss [14]. Characterizations of low pass filters for an MRA on L2 (R) and the dyadic dilations are already known, see the papers by M. Papadakis, H. Siki´c and G. Weiss [24] and by V. Dobri´c, R. F. Gundy and P. Hitczenko [10]. Afterwards, R. F. Gundy [13] addressed the same question when the condition (iv) in the definition of MRA is relaxed by assuming that {φ(x − k) : k ∈ Z} is a Riesz basis for V0 . The author [25] proved another necessary and sufficient condition on low pass filters following the strategy of Lawton. In fact, that condition was even presented on low pass filters H in an A-MRA defined on L2 (Rn ). Such a condition is written below. Let class of all functions H ∈ L∞ (Tn ) such that the infinite prodQ∞HA be the ∗ −j uct j=1 |H((A ) t)| converges almost everywhere on Rn and is A∗ -locally nonzero at the origin. Moreover, let ΠA be the class of all measurable functions on Rn such that f (0) = 1, 0 ≤ f (t) ≤ 1 a.e. on Rn and the origin is a point of A∗ -approximate continuity of f . Theorem A. Let H ∈ HA . Then the following conditions are equivalent: A) The function |H| is a low pass filter associated with a scaling function b := Q∞ |H((A∗ )−j t)|. θ of an A-MRA where θ(t) j=1 n T 1 B) The only function f ∈ L (T ) ΠA invariant under the operator P is the function f ≡ 1. To give a complete characterization of all low pass filters associated with scaling functions, we need also the following remark done in [25].

420

LAWTON'S CONDITIONS ON FILTERS

A. San Antol´ın

5

Remark A. A measurable function H is a low pass filter of an A-MRA if and only if |H| is a low pass filter of some A-MRA. Observe that it is not always easy to check whether a function H ∈ L∞ (Tn ) belongs to the class HA . In this paper we give a condition on the regularity at the origin of the function H which assures that H ∈ HA . Moreover, depending on the regularity we assume on H we prove other necessary and sufficient conditions on H to be a low pass filter in an A-MRA following the strategy of Lawton. Those conditions do not appear in the literature and are new even for low pass filters in an MRA defined on L2 (R) with the dyadic dilation.

3

Main Results

We prove the following results. Lemma 1. Let H ∈ L∞ (Tn ) be a function such that |H(0)| = 1, |H| is H¨ older continuous at the origin and dX A −1

|H(t + (A∗ )−1 pi )|2 = 1

a.e. on Rn ,

(4)

i=0

then H ∈ HA . In order not to repeat conditions let us introduce the two following classes of measurable functions. ΥA = {f ∈ L1 (Tn ) : f (0) = 1,

f is continuous at the origin

and 0 ≤ f (t) ≤ 1, a.e. on Rn }. If a function f ∈ ΥA is also differentiable at the origin, we will say that f belongs to the class ΛA . Theorem 1. Let H be a measurable function such that |H(0)| = 1, |H| is Zn periodic continuous and H¨ older continuous at the origin. Then the following conditions are equivalent: I) The function H is a low pass filter in an A-MRA. II) The only function f ∈ ΥA invariant under the operator P is the function f ≡ 1. Theorem 2. Let H be a measurable function such that |H(0)| = 1 and |H| is a Zn -periodic differentiable function. Then the following conditions are equivalent: 1) The function H is a low pass filter in an A-MRA. 2) The only function f ∈ ΛA invariant under the operator P is the function f ≡ 1.

LAWTON'S CONDITIONS ON FILTERS

A. San Antol´ın

4

421

6

Proof of Lemma 1

Proof of Lemma 1. First of all, we prove that the infinite product ∞ Y

|H((A∗ )−j t)|

(5)

j=1

converges almost everywhere on Rn to a well defined measurable function. According to the condition (4) there exists a measurable set E ⊂ Rn , |E|n = 0, such that 0 ≤ |H(t)| ≤ 1 for every t ∈ Rn \ E. Let F = ∗ k ∪∞ k=−∞ (A ) E. ∗ −k Given ε > 0, we set two positive numbers r, R > 0 such that ∪∞ Br ⊂ k=0 (A ) BR ⊂ U and ∞ X CK α Rα β jα < ε, j=1

where U is the open neighborhood in the definition of H¨older continuous at the origin and C, α, K and β are the corresponding constants in the inequalities (1) ∗ −k and (3). Let S = ∪∞ Br . k=0 (A ) If t ∈ S \ F , then for every j ∈ {1, 2, ...} 0 ≤ 1 − |H((A∗ )−j t)| ≤ CK α β jα |t|α ≤ CK α β jα Rα . Thus, for every J ∈ {2, 3, ...} 0 ≤ 1−

J Y

|H((A∗ )−j t)|

j=1

≤ 1−

J Y

|H((A∗ )−j t)| +

j=2

≤ CK α Rα

J Y

|H((A∗ )−j t)|1 − |H((A∗ )−1 t)||

j=2 J X

β jα ≤ CK α Rα

j=1

∞ X

β jα < ε.

j=1

Letting J → ∞ we obtain 1−ε≤

∞ Y

|H((A∗ )−j t)| ≤ 1,

∀t ∈ S \ F.

(6)

j=1

Furthermore, given t ∈ Rn \ F there exits N ∈ {1, 2, ...} such that (A∗ )−N t ∈ S \ F , then N ∞ Y Y |H((A∗ )−j t)| |H((A∗ )−j ((A∗ )−N t))| j=1

j=1

converges. So, we conclude that the infinite product (5) converges a.e. on Rn to a well defined measurable function. Finally, by (6), θb is A∗ -locally nonzero at the origin.

422

LAWTON'S CONDITIONS ON FILTERS

A. San Antol´ın

5

7

Proof of Theorem 1 and Theorem 2.

In the proof of Theorems 1 and 2 we will need the following results. The following characterization of scaling functions in a multiresolution analysis was given in [5]. Theorem B. Let φ ∈ L2 (Rn ). Then the following conditions are equivalent: (A) The function φ is a scaling function of an A-MRA; (B) (α) The function φb is A∗ -locally nonzero at the origin; (β) Φφ (t) = 1 a.e. on Tn ; (γ) There exists a Zn -periodic function, H ∈ L∞ (Tn ), |H(t)| ≤ 1 a.e. on Rn , such that b ∗ t) = H(t)φ(t) b φ(A

a.e. on Rn ;

b (C) (α∗ ) Setting |φ(0)| = 1, the origin is a point of A∗ -approximate conb and the conditions (β) and (γ) hold. tinuity of |φ|; PropositionQA. Let H ∈ L∞ (Tn ) be a function such that (4) holds. If the ∞ infinite product j=1 |H((A∗ )−j t)| converges almost everywhere then b := Q∞ |H((A∗ )−j t)| belongs to L2 (Rn ) and a) the function θ(t) j=1 k θb kL2 (Rn ) ≤ 1; b) Φθ (t) ≤ 1 a.e. on Rn ; c) Φθ is a fix point for the operator P , b := Q∞ |H((A∗ )−j t)|. where the function θ is defined by θ(t) j=1 In the above proposition, the condition a) was proved by M. Bownik [2] (cf. [9],[14]), the condition b) was proved in the proof of main result in [25] and the condition c) also was proved in [2]. Remark B. If in Proposition A we add the hypotheses: |H(0)| = 1 and |H| is a Zn -periodic continuous function and also is H¨older continuous at the origin, b then the function θb is continuous and θ(0) = 1. The following result was proved by M. Bownik [2]. Proposition B. Assume that a Zn -periodic function H satisfying (4) and |H(0)| = 1 is of class C N (Rn ) for some N = 1, 2, ..... Then the function b = Q∞ |H((A∗ )−j t)| is also of class C N (Rn ) and θ(0) b θ(t) = 1. j=1 Proof of Theorem 1. First of all, we will prove the implication I) =⇒ II). According to Remark A, since H is a low pass filter in an A-MRA then |H| is a T low pass filter in some A-MRA. Thus, because ΥA ⊂ L1 (Rn ) ΠA we finish the proof applying the condition B) in Theorem A. Let us prove the implication II) =⇒ I). According to Lemma 1 the infinite product ∞ Y b = θ(t) |H((A∗ )−j t)| j=1

LAWTON'S CONDITIONS ON FILTERS

423

A. San Antol´ın

8

converges almost everywhere on Rn to a well defined A∗ -locally nonzero measurable function. Observe that θb ∈ L2 (Rn ) by the condition a) in Proposition A, b ∗ t) = |H(t)|θ(t) b a.e. on Rn . and in addition, θ(A Let θ be the function defined by θb and consider the function Φθ given by (2), then the condition c) in Proposition A tells us that Φθ is a fix point for the operator P . If we prove that the function Φθ belongs to ΥA , then by the condition II) in Theorem 1 we will have that Φθ (t) = 1 a.e. on Tn . Hence according to Theorem B the function θ is a scaling function of an A-MRA with associated low pass filter |H|. Obviously, Φθ is a Zn -periodic function and 0 ≤ Φθ (t). We do not write “a.e.” in the above inequality because according to Remark B, the function θb is a continuous. So, by the same reason and due to the condition c) in Proposition A, Φθ (t) ≤ 1 holds. Moreover, since θb is a continuous function and b b ≤ Φθ (t) ≤ 1 yield that Φθ (0) = 1 and the origin θ(0) = 1, the inequalities θ(t) is a point of continuity of Φθ . Therefore, Φθ ∈ ΥA . Finally, applying Remark A the proof of Theorem 1 will be finished. Proof of Theorem 2. In an analogous way that the proof of the implication I) =⇒ II) in Theorem 1 we can prove the implication 1) =⇒ 2) in Theorem 2. b = Q∞ |H((A∗ )−j t)| and reTo prove the implication 2) =⇒ 1), let θ(t) j=1 peating the schema of the proof of II) =⇒ I) in Theorem 1, it is enough if we prove that Φθ ∈ ΛA . From that proof we know that Φθ ∈ ΥA , then it remains to prove that Φθ is differentiable at the origin. Let us check that the partial derivatives of Φθ at the origin exist and are b zero. According to Proposition B the function θb is differentiable and θ(0) = 1. n b Thus using the inequalities θ(t) ≤ Φθ (t) ≤ 1 for every t ∈ R (see the proof of Theorem 1) we obtain that Φθ (0) = 1 and also lim sup | h→0

2 b b + hei ))2 Φθ (0) − Φθ (0 + hei ) (θ(0)) − (θ(0 | ≤ lim sup = 0, h |h| h→0

b 2 is differentiable and it takes where the equality is true due to the function (θ) a maximum value at the origin. Furthermore, lim sup t→0

2 2 b b |Φθ (0) − Φθ (t)| (θ(0)) − (θ(t)) ≤ lim sup = 0. |t| |t| t→0

Therefore, Φθ is a differentiable function at the origin.

References [1] C.Boor, R.DeVore, A.Ron; On the construction (pre)wavelets, Constr. Approx. 9,123–166(1993).

of

multivariate

424

LAWTON'S CONDITIONS ON FILTERS

A. San Antol´ın

9

[2] M.Bownik; Tight frames of multidimensional wavelets, Dedicated to the memory of Richard J. Duffin. J. Fourier Anal. Appl. 3, no. 5,525–542(1997). [3] A.Bruckner; Differentiation of real functions, Lecture Notes in Mathematics, 659, Springer, Berlin, 1978. [4] C.K.Chui; An Introduction to Wavelets, Academic Press, Inc. 1992. [5] P.Cifuentes, K.S.Kazarian, A.San Antol´ın; Characterization of scaling functions in a multiresolution analysis, Proc. Amer. Math. Soc. 133, No. 4,10131023(2005). [6] A.Cohen; Ondelettes, analyses multir´esolutions et filtres miroirs en quadrature, Ann. Inst. H. Poincar´e, Anal. non lin´eaire 7, no. 5,439–459(1990). [7] A.Cohen, I.Daubechies, J.C.Feauveau; Biorthogonal bases of compactly supported wavelets, Comm. Pure Appl. Math. 45, no. 5,485–560(1992). [8] S.Dahlke, W.Dahmen and V.Latour; Smooth refinable functions and wavelets obtained by convolution products. Appl. Comput. Harmon. Anal. 2, no. 1,68–84(1995). [9] I.Daubechies; Ten lectures on wavelets, SIAM, Philadelphia, 1992. [10] V.Dobri´c, R.F.Gundy, P.Hitczenko; Characterizations of orthonormal scale functions: a probabilistic approach, J. Geom. Anal. 10, no. 3,417–434(2000). [11] K.Gr¨ ochening, W.R.Madych; Multiresolution analysis, Haar bases and selfsimilar tillings of Rn , IEEE Trans. Inform. Theory, 38(2),556–568(1992). [12] R.F.Gundy; Two remarks concerning wavelets: Cohen’s criterion for lowpass filters and Meyer’s theorem on linear independence The functional and harmonic analysis of wavelets and frames (San Antonio, TX, 1999), 249– 258, Contemp. Math., 247, Amer. Math. Soc., Providence, RI, 1999. [13] R.F.Gundy; Low-pass filters, martingales, and multiresolution analyses, Appl. Comput. Harmon. Anal. 9, no. 2, 204–219(2000). [14] E.Hern´ andez and G.Weiss; A first course on Wavelets, CRC Press, Inc. 1996. [15] R.Q.Jia and C.A.Micchelli; Using the refinement equations for the construction of pre-wavelets. II. Powers of two. Curves and surfaces (ChamonixMont-Blanc, 1990), 209–246, Academic Press, Boston, MA, 1991. [16] J.C.Lagarias, Y.Wang; Orthogonality criteria for compactly supported refinable functions and refinable function vectors, J. Fourier Anal. Appl. 6, no. 2,153–170(2000). [17] W.M.Lawton; Tight frames of compactly supported affine wavelets, J. Math. Phys. 31, no. 8,1898–1901(1990).

LAWTON'S CONDITIONS ON FILTERS

A. San Antol´ın

425

10

[18] W.M.Lawton; Necessary and sufficient conditions for constructing orthonormal wavelet bases, J. Math. Phys. 32, no. 1,57–61(1991). [19] R.A.Lorentz, W.R.Madych, A.Sahakian; Translation and dilation invariant subspaces of L2 (R) and multiresolution analyses, Applied and Computational Harmonic Analysis 5, no. 4,375–388(1998). [20] W.R.Madych; Some elementary properties of multiresolution analyses of L2 (Rd ), Wavelets - a tutorial in theory and applications, Ch. Chui ed., Academic Press,259–294(1992). [21] S.Mallat; Multiresolution approximations and wavelet orthonormal bases for L2 (R), Trans. of Amer. Math. Soc., 315,69–87(1989). [22] Y.Meyer; Ondelettes et op´ erateurs. I, Hermann, Paris (1990) [ English Translation: Wavelets and operators, Cambridge University Press, (1992).] [23] I.P.Nathanson; Theory of functions of a real variable, London, vol. I, 1960. [24] M.Papadakis, H.Siki´c, G.Weiss; The characterization of low pass filters and some basic properties of wavelets, scaling functions and related concepts, J. Fourier Anal. Appl. 5, no. 5,495–521(1999). [25] A.San Antol´ın; Characterization of low pass filters in a multiresolution analysis (to appear in “Studia Mathematica”) [26] R.Strichartz; Construction of orthonormal wavelets, Wavelets: mathematics and applications, Stud. Adv. Math., CRC, Boca Raton, FL,23–50(1994). [27] P.Wojtaszczyk; A mathematical introduction to wavelets, London Mathematical Society, Student Texts 37, 1997.

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 426-438, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Strip-saturation Model Solution for Piezoelectric Strip by Quadratically Varying Electric Displacement

¹R. R. Bhargava and ²Amit Setia
Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, 247667, India
¹e-mail: [email protected], ²[email protected]

Abstract
A crack arrest model is proposed for a cracked, poled, infinitely long and narrow composite piezoceramic strip. The finite crack is symmetrically situated and oriented longitudinally with respect to the edges of the strip. Uniform anti-plane shear stress or strain and in-plane normal electric displacement are applied on the finitely distant edges of the strip. Consequently the strip yields both mechanically and electrically. Under the assumption that the strip is electrically more brittle, an electrical singularity is encountered first, and it is at this level that the investigation is carried out. To stop the crack from further electrical polarization, the rims of the developed saturation zones are prescribed a quadratically varying, normal, cohesive, in-plane saturation-limit electric displacement. The load required to arrest the developed saturation zone is assessed. Expressions for the in-plane electric crack opening displacement and the electric crack growth rate are obtained. A case study is presented for BaTiO3, PZT-4 and PZT-5H strips.
Keywords: Piezoelectric strip, strip-saturation model, crack opening displacement, crack growth rate, saturation zone

1 INTRODUCTION

Gao and coworkers [8, 7] established that the local energy release rate for a piezoelectric crack with electrical yielding confined to a strip in front of the crack is independent of the yielding parameters and can be fully determined from a linear piezoelectric crack analysis. They further investigated the effects of electrical yielding on a finite crack lying perpendicular or parallel to the poling axis of an infinite poled piezoelectric ceramic medium. A crack perpendicular to the poling axis in a general poled ferroelectric is discussed by Ru [4], who considered the implications of the strip-saturation model for an electric-field-induced crack. He [5] also studied the mixed boundary value problem and obtained the near crack tip field for a conducting crack parallel or perpendicular to the poling axis based on a strip-saturation model. Wang and Zhang [1] discussed an electric strip-saturation model for fracture prediction of piezoceramics containing electrically impermeable cracks. Wang and Mai [2] investigated the fracture behavior of a cracked piezoceramic medium under transient electromechanical loads.

The work on the cracked piezoelectric strip was started by Shindo et al. [12]. They used the theory of linear piezoelectricity to solve the electroelastic problem of a finite crack in an orthotropic piezoelectric strip. A Fourier integral transform technique was used to reduce the problem to a pair of dual integral equations. They [13] extended the work to study the singular stress and electric field in an orthotropic piezoelectric ceramic strip containing a Griffith crack under longitudinal shear. Li [3] analyzed the problem of a finite crack in a functionally graded material strip under antiplane mechanical and inplane electrical loading. In this case the elastic stiffness, piezoelectric constants and dielectric permittivity were taken to vary along the thickness of the strip. Li [9] examined the strip-saturation model for a piezoelectric crack in a permeable environment to analyze the fracture toughness of piezoelectric ceramics. In this study a permeable crack was modeled as a vanishingly thin but finite rectangular slit with surface charge deposited along the crack surface.

2 METHODOLOGY

As is well known, the out-of-plane displacement problem in the xoy-plane may be defined as u_x(x, y, z) = u_y(x, y, z) = 0 and u_z(x, y, z) = u_z(x, y), (1) where u_i (i = x, y, z) denote the displacement components along the x, y and z directions. Similarly, an in-plane electric field problem for the xoy-plane is defined as E_x(x, y, z) = E_x(x, y), E_y(x, y, z) = E_y(x, y) and E_z(x, y, z) = 0

(2)

where Ei , (i = x, y, z) denotes the electric field component along z−direction. Consequently linear piezoelectric theory the constitutive equations may be written as σxz = c44 uz,x + e15 φ,x σyz = c44 uz,y + e15 φ,y Dx = e15 uz,x − ²11 φ,x Dy = e15 uz,y − ²11 φ,y

(3) (4) (5) (6)

where σiz , Di (i = x, y) denote the shear stress component, electric displacement component. A comma after function denotes its partial differentiation with respect to the argument following it. c44 , e15 and ²11 denote elastic piezoelectric and dielectric constants respectively. The gradient equations reduce to γiz = uz,i 2

(7)


Ei = −φ,i   (8)
where i = x, y. The stress equilibrium equations in the absence of body forces are given by
σij,j = 0   (9)
where i, j = x, y, z. The electrical displacement equation in the absence of body electric charge may be written as
Di,i = 0   (10)
The governing equations are obtained by substituting Eqs. (3)–(6) into the equilibrium Eqs. (9), (10), which finally reduce to the solution of
∇²uz = 0 and ∇²φ = 0   (11)
where ∇² = ∂²/∂x² + ∂²/∂y² is the Laplacian operator. Using the Fourier cosine transform, the solution of Eq. (11) may be written as
uz(x, y) = (2/π) ∫_0^∞ [A1(α) cosh(αy) + A2(α) sinh(αy)] cos(αx) dα + a_h y   (12)
and the electric potential φ is given by
φ(x, y) = (2/π) ∫_0^∞ [B1(α) cosh(αy) + B2(α) sinh(αy)] cos(αx) dα − b_h y   (13)

where Ai(α), Bi(α) are arbitrary functions, determined using the boundary conditions of the problem under investigation, and the arbitrary constants a_h and b_h are obtained using the conditions prescribed on the edges of the strip. Since boundary conditions are also prescribed on the permittivity of the vacuum inside the crack, the constitutive equations for the electric displacement components Di^V (i = x, y) reduce to
Dx^V = ε0 Ex   (14)
Dy^V = ε0 Ey   (15)
where ε0 is the electrical permittivity of the vacuum and e15 = e31 = e33 = 0. The governing equation for the potential φ^V in the vacuum reduces to
∇²φ^V = 0   (16)
Its solution, using the Fourier transform technique and the condition Dy(x, 0) = Dy^V(x, 0), may be written as
φ^V(x, y) = (2/π) ∫_0^∞ C(α) sinh(αy) cos(αx) dα, for 0 ≤ x < c   (17)
The opening mode electric displacement intensity factor at the tip x = a is defined as
K_ID = lim_{x→a+} [√(2π(x − a)) Dy(x, 0)]   (18)


The in-plane opening mode electrical displacement Dy(x) is calculated using
Dy(x) = (2/e15) ∫_x^a M(x, α) K_ID(α) dα   (19)
where
M(x, α) = √(α/π) / √(α² − x²)   (20)
taken from ref. [10].
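As a purely illustrative aid, the following sketch evaluates the integral (19) with the kernel (20) by quadrature, after the substitution α = x cosh u which removes the integrable singularity at α = x. The profile K_ID(α) used here is a hypothetical placeholder; in the paper it follows from the solution of the dual integral equations derived below.

# Minimal numerical sketch (not from the paper) of Eq. (19) with kernel (20).
# K_ID below is a hypothetical placeholder; the tip position a and the
# constant e15 are likewise illustrative values only.
import numpy as np
from scipy.integrate import quad

def K_ID(alpha):
    return 1.0e-3 * np.sqrt(alpha)          # placeholder intensity-factor profile

def D_y(x, a, e15):
    # Eq. (19): D_y(x) = (2/e15) * int_x^a sqrt(alpha/pi) K_ID(alpha) / sqrt(alpha^2 - x^2) d(alpha);
    # substituting alpha = x*cosh(u) turns it into a regular integral in u.
    u_max = np.arccosh(a / x)
    integrand = lambda u: np.sqrt(x * np.cosh(u) / np.pi) * K_ID(x * np.cosh(u))
    val, _ = quad(integrand, 0.0, u_max)
    return 2.0 / e15 * val

if __name__ == "__main__":
    a, e15 = 1.2, 10.0
    for x in (0.2, 0.5, 0.8, 1.0):
        print(x, D_y(x, a, e15))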

3 THE PROBLEM


Figure 1: Schematic representation of the problem.
An infinitely long narrow piezoceramic strip occupies the region −h ≤ y ≤ h, −∞ < x < ∞ in the xoy-plane, as shown in Figure 1. The strip is assumed to be uniformly thick along the z-direction so as to allow an anti-plane shear stress/strain state, and is poled along the z-direction. The infinitely distant edges of the strip are stress/strain and charge free. The strip is cut along a hairline straight crack which occupies the interval y = 0, −c ≤ x ≤ c, and is oriented longitudinally to the edges of the strip. The rims of the crack are stress and charge free. The edges y = ±h are prescribed a uniform anti-plane shear stress τyz(x, ±h) = τh or deformation γyz(x, ±h) = γh together with an in-plane normal electrical displacement Dy(x, ±h) = Dh. Under the assumption that the strip is electrically more brittle, an electric singularity is encountered first. Consequently, under the small-scale electric polarization condition, a strip-saturation zone develops ahead of each tip of the crack. The saturation zones occupy the intervals y = 0, c ≤ x < a and y = 0, −a < x ≤ −c, respectively. To arrest further polarization the rims of the developed saturation zones are subjected to a cohesive electrical displacement Dy = mx², where under small-scale electric saturation m = Ds/c², Ds being the saturation limit electrical displacement. Consequently the crack is stopped from further opening.


4 MATHEMATICAL MODEL

A poled piezoceramic strip occupies the region |y| ≤ h, |x| < ∞ in the xoy-plane. The strip is cut along y = 0, |x| ≤ c. Due to the symmetry of the problem only the first quadrant region is considered. The conditions prescribed above may be translated mathematically as
(i) On the finitely distant edges of the strip y = h, x → ∞:
  (a) Case I: σyz(x, h) = τh, Dy(x, h) = Dh
  (b) Case II: γyz(x, h) = γh, Dy(x, h) = Dh
(ii) φ(x, 0) = 0, for c ≤ x < ∞
(iii) Ex(x, 0) = Ex^V(x, 0), for 0 ≤ x < c
(iv) Dy(x, 0) = mx² H(x − c), for 0 ≤ x < a
(v) uz(x, 0) = 0, for a ≤ x < ∞

5 ANALYSIS AND SOLUTION

The general solution of the problem is written using Eqs. (12) and (13). The arbitrary functions and constants are determined using boundary conditions (i)–(v) as follows.

5.1 Determination of arbitrary constants a_h and b_h

Substituting from Eqs. (12) and (13) into Eqs. (4), (6), using each of the conditions (i) and simplifying, one obtains for Case I
a_h^I = (ε11 τh − e15 Dh) / (c44 ε11 + e15²)   (21)
b_h^I = −(e15 τh − c44 Dh) / (c44 ε11 + e15²)   (22)
where the superscript I denotes that the quantity refers to Case I. For Case II, analogously using Eqs. (12), (13), (4), (6) and boundary condition (i)(b), the following is obtained:
a_h^II = γh   (23)
b_h^II = (Dh − e15 γh) / ε11   (24)
The superscript II denotes that the quantities refer to Case II.
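The constants (21)–(24) are elementary to evaluate once the material data are fixed; the short sketch below simply codes the two cases. The material constants and applied loads used in it are order-of-magnitude placeholders, not the BaTiO3/PZT values of the case study.

# Minimal sketch (not from the paper): edge constants a_h, b_h of Eqs. (21)-(24).
def edge_constants_case1(c44, e15, eps11, tau_h, D_h):
    """Case I: prescribed shear stress tau_h and electric displacement D_h."""
    denom = c44 * eps11 + e15 ** 2
    a_h = (eps11 * tau_h - e15 * D_h) / denom       # Eq. (21)
    b_h = -(e15 * tau_h - c44 * D_h) / denom        # Eq. (22)
    return a_h, b_h

def edge_constants_case2(e15, eps11, gamma_h, D_h):
    """Case II: prescribed shear strain gamma_h and electric displacement D_h."""
    a_h = gamma_h                                   # Eq. (23)
    b_h = (D_h - e15 * gamma_h) / eps11             # Eq. (24)
    return a_h, b_h

if __name__ == "__main__":
    # hypothetical material data (orders of magnitude only)
    c44, e15, eps11 = 2.6e10, 10.0, 6.0e-9          # N/m^2, C/m^2, C/(V m)
    tau_h, gamma_h, D_h = 1.0e6, 1.0e-4, 1.0e-3     # applied loads
    print(edge_constants_case1(c44, e15, eps11, tau_h, D_h))
    print(edge_constants_case2(e15, eps11, gamma_h, D_h))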

5.2 Determination of arbitrary functions Ai(α) and Bi(α), (i = 1, 2)

The remaining boundary conditions (ii)–(v), together with the appropriate constitutive and gradient equations and Eqs. (12) and (13), yield a set of integral equations that enable one to find


Ai(α) and Bi(α) as follows. Boundary condition (ii) together with Eq. (13) leads to the integral equation
∫_0^∞ B1(α) cos(αx) dα = 0;  c ≤ x < ∞   (25)
Boundary condition (iii), using Eqs. (13), (17), (8), yields the integral equation
∫_0^∞ α B1(α) sin(αx) dα = 0;  0 ≤ x < c   (26)
Solving the above pair of dual integral equations (25), (26) and introducing B1(α) as
B1(α) = (πc²/2) ∫_0^1 √ξ Ψ2(ξ) J0(cαξ) dξ   (27)
it is obtained that Ψ2(ξ) = 0, which implies that
B1(α) = 0   (28)

Boundary condition (iv) and Eqs. (12), (13), (6), (17), (15) can be simplified to yield
−(2/π) e15 ∫_0^∞ α A1(α) tanh(αh) cos(αx) dα + d0 = mx² H(x − c),  0 ≤ x < a   (29)
d0 = a_h^i + b_h^i,  where i = I, II   (30)
Boundary condition (v) together with Eq. (12) gives
∫_0^∞ A1(α) cos(αx) dα = 0;  a ≤ x < ∞   (31)
Introducing, for the convenience of computations,
A1(α) = (πa²/2) ∫_0^1 √ξ Ψ1(ξ) J0(aαξ) dξ   (32)

and solving the pair of dual integral Eqs. (29, 31), one finally obtains after computations a Fredholm integral equation of second kind for determining Ψ1 (ξ) from Z 1 Ψ1 (ξ) + K(ξ, η)Ψ1 (η)dη = 0  c Dh ξ 1/2  ξ<   e15 , a ! Ã µ ³ c/a ´ 1 ³ c/a ´¶ c 2 5/2 1/2 ma ξ 2 D ξ h  1 − arcsin + sin 2 arcsin , 0 we get |(F fn )(t)|pn /qn → 1 and (||F ||pn )pn /qn → 1. Therefore |gn (t)| → 1,

(20)

for all t except possibly finite many. As a result gn (t) → g(t),

(21)

almost everywhere, |g(t)| = 1 and therefore g ∈ S(L∞ [0, 2π]). Observe that both F (fn ) and w are continuous functions. Using (18) there is a constant M 4


such that |F(fn)(t)| < M. By (19) there is a constant K such that |gn(t)| < K for all t except finitely many points. That means that gn ∈ L∞[0, 2π] and, by Lebesgue's dominated convergence theorem,
∫_0^{2π} gn(t)h(t) dm → ∫_0^{2π} g(t)h(t) dm,   (22)

for every h ∈ L1[0, 2π]. Now we need to show that this g is a norming functional for F in L1[0, 2π]. Consider F** : L1** → πN. Observe that
||fn||_1 ≤ ||fn||_{pn} = 1.   (23)

Therefore fn ∈ L1[0, 2π] and, using the canonical embedding of X in X**, we have fn ∈ L1**[0, 2π] and ||fn||_{L1**} ≤ 1. As a result fn has a weak* convergent subsequence in L1**. Passing to a subsequence, if necessary, we may assume that
fn → f,   (24)

weak* in L1**[0, 2π], with ||f||_{L1**} ≤ 1. By Lemma 2.1, since F**(fn), F**(f) ∈ πN, we have
F**(fn) → F**(f),   (25)
in norm topology. By (22) we have
gn(h) → g(h),   (26)

for every h ∈ πN . Putting the above two facts together we get |gn (F ∗∗ fn ) − g(F ∗∗ f )| = |gn (F ∗∗ fn − F ∗∗ f ) + (gn − g)(F ∗∗ f )| ≤ ||F ∗∗ fn − F ∗∗ f || + |(gn − g)(F ∗∗ f )| → 0.

(27)

But gn(F**fn) = gn(F fn) = ||F||_{pn} → ||F||_1 = ||F||_{L1**}. Therefore
g(F**f) = ||F||_{L1**}.   (28)
And since ||f||_{L1**} ≤ 1 we have ||g ∘ F**|| = ||F||_{L1**}, and as a result
||g ∘ F|| = ||F||_1,   (29)

so g is a norming functional for F in L1[0, 2π].

Lemma 2.3 Every norming functional g ∈ S(L∞[0, 2π]) for the Fourier projection F in L1[0, 2π] has to be of the form
g(t) = sign DN(t + s), for some s ∈ [0, 2π]   (30)
(here the addition is considered to be modulo [0, 2π]).


Proof. Let g ∈ S(L∞[0, 2π]) be a norming functional for the Fourier projection F : L1[0, 2π] → πN. Denote dt = (1/2π) dm(t) and dx = (1/2π) dm(x). For any t we have
|∫_0^{2π} g(x)DN(x − t) dx| ≤ ∫_0^{2π} |g(x)DN(x − t)| dx = ∫_0^{2π} |DN(x − t)| dx = ∫_0^{2π} |DN(x)| dx.   (31)
Since DN(x) = 1 + 2 cos(x) + ... + 2 cos(Nx), the function
h(t) = ∫_0^{2π} g(x)DN(x − t) dx   (32)
is continuous on [0, 2π]. We will show that
max_{t∈[0,2π]} |∫_0^{2π} g(x)DN(x − t) dx| = ∫_0^{2π} |DN(x)| dx.   (33)
Assume for the contrary that there is δ > 0 such that
max_{t∈[0,2π]} |∫_0^{2π} g(x)DN(x − t) dx| = ∫_0^{2π} |DN(x)| dx − δ.   (34)
Take any f ∈ S(L1[0, 2π]). Using Fubini's Theorem we would get
|g(F f)| = |∫_0^{2π} (∫_0^{2π} f(t)g(x)DN(x − t) dt) dx|
 = |∫_0^{2π} (∫_0^{2π} f(t)g(x)DN(x − t) dx) dt|
 = |∫_0^{2π} f(t) (∫_0^{2π} g(x)DN(x − t) dx) dt|
 ≤ ∫_0^{2π} |f(t)| |∫_0^{2π} g(x)DN(x − t) dx| dt
 ≤ ∫_0^{2π} |f(t)| (∫_0^{2π} |DN(x)| dx − δ) dt
 = (∫_0^{2π} |DN(x)| dx − δ) (∫_0^{2π} |f(t)| dt)
 = ∫_0^{2π} |DN(x)| dx − δ = ||F||_1 − δ.   (35)
That implies that g is not a norming functional for F, a contradiction. Therefore (33) holds and, as a result of (31), there is t0 such that
|∫_0^{2π} g(x)DN(x − t0) dx| = ∫_0^{2π} |g(x)DN(x − t0)| dx,   (36)
and as a result g(x) = sign DN(x − t0).


Theorem 2.4 Fix N and consider the Fourier projection F : Lp[0, 2π] → πN. Then there is an ε > 0 such that for every p ∈ (1, 1 + ε) we can find a norming functional h ∈ S(Lq[0, 2π]) for F in Lp[0, 2π] such that
(∫_0^{2π} h(t) sin(kt) dt)² + (∫_0^{2π} h(t) cos(kt) dt)² ≠ 0, for k = 0, 1, ..., N.   (37)

Proof. Assume for the contrary that there is a sequence pn → 1 and a sequence of norming functionals gn ∈ S(Lqn[0, 2π]) for F in Lpn[0, 2π] such that
∫_0^{2π} gn(t) sin(kt) dt = 0 and ∫_0^{2π} gn(t) cos(kt) dt = 0,   (38)
for some k ∈ {0, 1, ..., N}. Using Theorem 2.2 we get
∫_0^{2π} g(t) sin(kt) dt = 0 and ∫_0^{2π} g(t) cos(kt) dt = 0   (39)
and g is a norming functional for F in L1[0, 2π]. But Lemma 2.3 gives g(t) = sign DN(t + s). Using the Fejér formula [3], we know the first 2N + 1 terms in the Fourier expansion of sign DN(t):
sign DN(t) = 1/(2N + 1) + Σ_{m=1}^{N} (2/(mπ)) tan(mπ/(2N + 1)) cos(mt) + ...   (40)
From the above we can easily see that (39) cannot occur (a quick numerical check of these coefficients is sketched below).
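The coefficients in (40) are easy to verify numerically. The fragment below is not part of the proof; it simply integrates sign DN(t) against constants and cosines on a uniform grid and prints the results next to the closed-form values quoted in (40).

# Quick numerical check of the Fourier coefficients of sign D_N(t) from (40).
import numpy as np

N = 4
M = 200000
t = (np.arange(M) + 0.5) * 2.0 * np.pi / M            # midpoint grid on [0, 2*pi]
D = 1.0 + 2.0 * sum(np.cos(k * t) for k in range(1, N + 1))   # Dirichlet kernel D_N
s = np.sign(D)

a0 = s.mean()                                          # (1/2pi) * integral of sign D_N
print("a0:", a0, " closed form:", 1.0 / (2 * N + 1))
for m in range(1, N + 1):
    am = 2.0 * (s * np.cos(m * t)).mean()              # (1/pi) * integral against cos(mt)
    closed = 2.0 / (m * np.pi) * np.tan(m * np.pi / (2 * N + 1))
    print(f"a{m}: {am:.6f}  closed form: {closed:.6f}")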

Theorem 2.5 Fix N and consider the Fourier projection F : Lp[0, 2π] → πN. Then there is an ε > 0 such that for every p ∈ (1, 1 + ε) the Fourier projection is the unique minimal projection from Lp[0, 2π] onto πN.
Proof. Take ε and the functional h from Theorem 2.4. Observe that, by (8), if h(t) is a norming functional then hs(t) = h(t + s) is also a norming functional. We will show that the set of norming functionals {hs, s ∈ [0, 2π]} is total over πN; that is, if w(x) ∈ πN and for every s ∈ [0, 2π]
(1/2π) ∫_0^{2π} h(t + s)w(t) dt = 0,   (41)
then w = 0. Take
a0 = (1/2π) ∫_0^{2π} h(t) dt,
ak = (1/π) ∫_0^{2π} h(t) sin(kt) dt,   (42)
bk = (1/π) ∫_0^{2π} h(t) cos(kt) dt.
By Theorem 2.4, a0 ≠ 0 and ak, bk ≠ 0 for k = 1, ..., N. It is easy to see that
(1/2π) ∫_0^{2π} h(t + s) dt = a0,
(1/π) ∫_0^{2π} h(t + s) sin(kt) dt = ak cos(ks) − bk sin(ks),   (43)
(1/π) ∫_0^{2π} h(t + s) cos(kt) dt = ak sin(ks) + bk cos(ks).
Take w = c0 + Σ_{k=1}^{N} (ck sin(kt) + dk cos(kt)). By (43), equation (41) would imply
a0 c0 + Σ_{k=1}^{N} [ck(ak cos(ks) − bk sin(ks)) + dk(ak sin(ks) + bk cos(ks))] = 0,   (44)
for every s ∈ [0, 2π]. As a result
a0 c0 + Σ_{k=1}^{N} [(dk ak − ck bk) sin(ks) + (ck ak + dk bk) cos(ks)] = 0,   (45)
for every s ∈ [0, 2π]. That is,
a0 c0 = 0,
dk ak − ck bk = 0,   (46)
ck ak + dk bk = 0.
Since a0, ak, bk ≠ 0, then c0 = ck = dk = 0 for all k = 1, ..., N; as a result w = 0. Applying Theorem 1.2 we get the uniqueness of F.

References [1] D. L. Berman, On the impossibility of constructing a linear polynomial operator furnishing an approximation of the order of the best approximation, Dokl. Akad. Nauk SSSR 120 (1958), pp. 1175–1177. MR0098941 (20 #5387) [2] E. W. Cheney, C. R. Hobby, P. D. Morris, F. Schurer, and D. E. Wulbert, On the minimal property of the Fourier projection, Trans. Amer. Math. Soc. 143 (1969), pp. 249–258. MR0256044 (41 #704) [3] E. W. Hardy, Note on Lebesgue’s constants in the theory of Fourier series, J. London Math. Soc. 17 (1942), pp. 4–13. MR0006754 (4,36f) [4] Pol V. Lambert, On the minimum norm property of the Fourier projection in L1 -spaces, Bull. Soc. Math. Belg. 21 (1969), pp. 370–391. MR0273293 (42 #8173) 8

SHEKHTMAN-SKRZYPEK: ...FOURIER PROJECTION...

[5] G. Lewicki and L. Skrzypek, Chalmers-Metcalf operator and uniqueness of minimal projections, J. Approx. Theory 148 (2007), pp. 71–91. MR2356576 [6] S. M. Lozinski, On a class of linear operations, Doklady Akad. Nauk SSSR (N. S.) 61 (1948), pp. 193–196. MR0026699 (10,188c) [7] W. Odyniec and G. Lewicki, 1449, Minimal projections in Banach spaces. Springer-Verlag, Berlin, 1990, Problems of existence and uniqueness and their application. MR1079547 (92a:41021) [8] W. Ruess and C. Stegall, Extreme Points in Duals of Operator Spaces, Math. Ann. 261 (1982), pp. 535–546. MR682665 (84e:46007) [9] B. Shekhtman and L. Skrzypek, Norming points and unique minimality of orthogonal projections, Abstr. Appl. Anal. (2006), pp. Art. ID 42305, 17. MR2211664 (2006k:46043) [10] B. Shekhtman, L. Skrzypek, Uniqueness of minimal projections onto twodimensional subspaces, Studia Math. 168 (3) (2005) 273–284. [11] B. Shekhtman and L. Skrzypek, On the non-uniqueness of minimal projection in Lp spaces, J. Approx. Theory, doi:10.1016/j.jat.2008.08.006

9

447

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 448-453, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Object Registration Using Graph Representations of Images Tamir Nave, Joseph M. Francos and Rami Hagege Electrical and Computer Engineering Department Ben-Gurion University Beer Sheva 84105, Israel ABSTRACT We consider the problem of object registration based on a set of known template images. The proposed solution employs a weighted graph representation of images, and a method that reduces the high dimensional problem of evaluating the orbit created by applying the set of all transformations in the group to a template, into a set of linear equations. The method yields a very large number of independent linear constraints that enable an explicit parametric estimation. Mathematical proof of the method is presented as well as analyzes and experiments that demonstrate its robustness. Index Terms— Image registration, Image recognition, Parameter estimation, Nonlinear estimation, Graph representation of images, Multidimensional signal processing 1. INTRODUCTION This paper is concerned with the general problem of automatic image registration based on a set of known templates. More specifically the paper presents the notion of compactly storing the topology of the image in a graph representation, and based on this representation, proposes an algorithmic solution to the registration problem. The fundamental setting of the problem and common approaches are provided in [1][4]. There are two key elements in a deformable template representation: A typical element (the template); and a family of transformations and deformations which when applied to the typical element produces other elements. The family of deformations considered in this paper is extremely wide: we consider differentiable homeomorphisms having a continuous and differentiable inverse, where the derivative of the inverse is also continuous. Thus each template is associated with its orbit, induced by the group action on the template. Hence, given measurements of an observed object (for example, in the form of an image) registration becomes the procedure of finding the group element that minimizes some metric with respect to the observation. Theoretically, in the absence of noise, the solution to the registration problem is obtained by applying each of the deformations in the group to the template, followed by comparing the result to the observed realization. However, as the

number of such possible deformations is infinite, this direct approach is computationally prohibitive. Hence, more sophisticated methods are essential. The analysis and the algorithmic solution derived in this paper enable a rigorous treatment of the homeomorphism estimation problem in a wide range of applications. The center of the proposed solution is a method that reduces the high dimensional problem of evaluating the orbit created by applying the set of all possible homeomorphic transformations in the group to the template, into a problem of analyzing a function in a low dimensional Euclidian space. In general, an explicit modeling of the homeomorphisms group is impossible. We therefore choose to solve this problem by focusing on subsets of the homeomorphisms group which are also subsets of vector spaces. This may be regarded as an approximation the homeomorphism using polynomials, based on the denseness of the polynomials in the space of continuous functions with compact support. In this setting, the problem of estimating the parametric model of the deformation is solved by a linear system of equations in the low dimensional Euclidian space. More specifically, consider the problem given by h(x1 , x2 ) = f (φ(x1 , x2 )) where φ(x1 , x2 ) = (φ1 (x1 , x2 ), φ2 (x1 , x2 )). In the problem setting considered here h and f are given while φ should be estimated. In [5],[6],[7] we analyzed this problem and derived estimation algorithms of the deformation φ, using non linear functionals employed to construct linear constraints on the parameters of the homeomorphisms. This paper presents a discrete graph representation of images and proposes a generalization of these functionals. The new functionals employ the graph representation of the image. The concept of graph representation of images stems from digital topology [9], and has been employed in a verity of applications [10], [11], [13], [14]. The common concept behind these representations is to have the topology of the image compactly packed in a graph structure. 2. GRAPH REPRESENTATION OF AN IMAGE Observe a continuous 2D image of an object and allow it to deform elastically into any shape as long as the deformation is a homeomorphism, (i.e. a continuous and invertible trans-



formation such that its inverse is also continuous). See Figure 1.

Fig. 1. Homeomorphisms on an image.

Obviously, the key to the solution of any registration problem is understanding what the basic invariant properties of the deformed object are. Clearly, its size and contours are not invariant; the edges and the straight lines in it also vary due to the elastic deformation. The set of colors of the deformed image may also change due to varying illumination conditions. Yet, there is a fundamental property that remains invariant, and this is the object topology. It is known that homeomorphisms preserve connectivity and the number of holes in a set (e.g., Figures 2, 3).
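Before the representation is stated formally, the idea can be made concrete with a small sketch (an illustration only, not the authors' implementation): quantize an image into a few intensity levels, take the connected components of each level as graph vertices weighted by their level, and join two vertices whenever their regions touch. The quantization into four levels and the 4-neighbour adjacency test used below are assumptions made only for this illustration.

# Illustrative sketch: build a weighted region-adjacency graph of an image.
import numpy as np
from scipy import ndimage

def image_to_graph(img, n_levels=4):
    levels = np.minimum((img * n_levels).astype(int), n_levels - 1)
    vertices, edges = [], set()
    label_map = np.zeros(img.shape, dtype=int)
    next_id = 0
    for lev in range(n_levels):
        comp, n = ndimage.label(levels == lev)       # connected components of one level
        for c in range(1, n + 1):
            label_map[comp == c] = next_id
            vertices.append((next_id, lev))          # vertex weight = intensity level
            next_id += 1
    # two vertices are adjacent if their regions touch (horizontal/vertical neighbours)
    for s0, s1 in ((0, 1), (1, 0)):
        a = label_map[: img.shape[0] - s0, : img.shape[1] - s1]
        b = label_map[s0:, s1:]
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return vertices, edges

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    img = ((x - 32) ** 2 + (y - 32) ** 2 < 200).astype(float) * 0.8 + 0.1
    V, E = image_to_graph(img)
    print(len(V), "vertices,", len(E), "edges")      # a disk on a background: 2 vertices, 1 edge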

Fig. 4. The topology of a complex structure of sets is invariant to homeomorphisms. Denoting the set of all bounded and measurable functions with compact support from X ⊂ R2 into Y ⊂ R by M, the previous understanding leads us to define the following transformation from M into the set of weighted graphs: f (x, y) → < E, V > by the following rule: v ∈ V ⇐⇒ v ⊆ f −1 (i) < v1 , v2 >∈ E ⇐⇒ cl(v1 ) ∩ cl(v2 ) 6= ∅

(1)

where i ∈ Im{f } is the weight of the vertex v ,v ⊆ X is a connected set, and cl means the closure of a set. The representation is non bijective: An image is represented by one graph only, however, a single graph represents many images. The motivation to represent an image by a weighted graph is that graph representation is homeomorphism-invariant. (See, e.g. Figure 4, 5). The main disadvantage of this representation is that apart from geometric deformation, real images suffer from intensity variations that may change their graph representation. 3. PROBLEM DESCRIPTION Fig. 2. Homeomorphisms preserve the number of holes in a set, not their shape.

In this section we shall briefly set the mathematical framework we adopt in order to formalize the analysis of the deformation estimation problem. This framework enables accurate representation and analysis of our problem, leading to rigorous criteria on the existence and uniqueness of the solution, and under some mild restrictions to be explained below to the derivation of an explicit solution. We note that due to the inherent physical properties of the problem, it is natural to model and solve it in the continuous domain. Inherently, the mapping φ of R2 into itself is of a continuous nature, as is Fig. 3. The topology of any two sets is maintained, after the physical phenomenon of geometric deformation of realhomeomorphisms. life objects it represents. Thus, if we impose a discrete model (e.g., (x1 , x2 ) ∈ Z 2 ), we find that, in general, the natural φ to Now, let us treat an image as a collection of colored patches. consider is incompatible (as for “almost all” (x1 , x2 ) ∈ Z 2 , Each patch is a connected set in R2 adjacent to different patches. (φ1 (x1 , x2 ), φ2 (x1 , x2 )) ∈ / Z 2 ). Thus, the problem and its Hence, the neighboring patches of any patch remain its neighsolution are formulated in the continuous domain, while the bors following any homeomorphism. (See, e.g., Figure 4) sampling and quantization effects that accompany the digital



of functions that don’t belong to MG , where G is the affine group, include any constant function defined on all of R2 ; any periodic function defined on all of R2 ; and functions with radial symmetry, such as a circle (as SO2 (R) ⊂ GL2 (R)). Note however that functions with compact support are not translation nor scale invariant. 3.2. Problem Statement

Fig. 5. Several instances of the same object that are distinct from each other by homeomorphisms, however they are all represented by the same graph. implementation of the method, are handled as noise contributions. 3.1. Group Theory Setting Let M denote the space of compact support, bounded, and Lebesgue measurable (or more simply, integrable) functions from R2 to R. Let x be some vector in R2 . Let G be a group representing the set of deformations the objets may undergo. G is said to act as a transformation group on M if there is a mapping G × M → M , denoted by (φ, f ) 7→ f ◦ φ = f (φ(x)) such that (f ◦ φ1 ) ◦ φ2 = f ◦(φ1 ◦φ2 ) for every φ1 , φ2 ∈ G and f ∈ M ; and if f ◦e = f for all f ∈ M , where e is the identity element of G. For a given f ∈ M , the set {f ◦ φ : φ ∈ G} is called the orbit of f . It is the entire set of possible observations on the object – the result of applying to it any of the deformations in the group. The stabilizer of the function f ∈ M with respect to the group G is the set of group elements φ ∈ G such that f ◦ φ = f , i.e., the set of group elements that map f to itself. Thus the group G naturally defines an equivalence relation on M in terms of the orbits of M induced by the action of G: Any two functions h and f are equivalent if they are on the same orbit, i.e., if there exists some φ ∈ G such that f ◦φ = h. Let MG ⊆ M be the subset of functions in M with no group symmetry, i.e., the set of functions in M whose stabilizer is trivial and includes only e, the identity element of G. Thus, MG is the subset of functions in M where uniqueness of the solution to the defined problem is guaranteed in the sense that if h, f ∈ MG such that they are on the same orbit, then there exists a single φ such that f ◦ φ = h. In [8] we show that MG is dense in M in the L2 norm. In contrast, examples

In the following we assume that G is the group of differentiable homeomorphisms such that each element of G has a continuous and differentiable inverse, where the derivative of the inverse is also continuous. The group G lies in the norm space C(X) of continuous real-valued functions of X. By the above assumption every φ−1 , (φ−1 )0 ∈ C(X). Since C(X) is a normed separable space, there exists a countable set of basis functions {ei } ⊂ C(X), such that for every φ ∈ G, (φ−1 )0 (x) =

X

bi ei (x) .

(2)

i

In other words, it is assumed that every element in the group and its derivative can be represented as a convergent series of basis functions of the separable space C(X). Our goal then, is to obtain the expansion of φ−1 (x) with respect to the basis functions {ei (x)}. In practice, the series (2) is replaced by a finite sum, i.e., we have 1 ≤ i ≤ m. Given two bounded, Lebesgue measurable functions h, g ∈ MG with compact supports, such that h(x) = f (φ(x)) , φ ∈ G, x ∈ R2

(3)

the problem is to find the deformation φ. As indicated above, the direct approach for solving the problem of finding the parameters of the unknown transformation φ ∈ G is to apply the set of all possible transformations, (i.e., every element of G), to the given template f , thus evaluating the entire orbit of f . Since h and f are homeomorphic, one of the points on the orbit represents the action of the desired group element φ. Nevertheless, since φ is modeled by m parameters, it is clear that implementation of such a search on the orbit requires a search over an m-dimensional manifold embedded in an infinite dimensional function space, which is infeasible. In this paper we show that the problem of finding the parameters of the unknown elastic transformation, whose direct solution requires a highly complex search in a function space, can be formulated as an explicit parameter estimation problem. Moreover, it is shown that the original problem can be formulated in terms of an equivalent problem which is expressed in the form of a linear system of equations. From every subgraph of the graph that represents the template we obtain a linear constraint in the unknown parameters of the transformation. A solution of this linear system of equations provides the unknown transformation parameters. In Section 4 we show how the problem of finding the parametric model


of the deformation can be transformed using a set on nonlinear graph-based functionals into a set of linear equations which is then solved for the transformation parameters. However, before getting in Section 4 into the details of this new representation of the problem, we shall briefly elaborate on the mathematical construction that enables it.

4. LINEAR CONSTRAINTS FROM DEFORMED IMAGE To simplify the notation and the accompanying discussion we present the solution for the case where the observed signals are one-dimensional. The derivation for higher dimensions follows along similar lines. Consider the problem formulated in (3) and let z = φ(x). Then φ−1 (z) = x, and hence

3.3. The Mathematical Structure and the Fundamental Commutative Property Recall that MG is the space of compact support, bounded, measurable functions, with no group symmetry. Let Lφ be the mapping from MG to itself induced by the group G, such that Lφ (f (x)) = f (φ(x)) for every f ∈ MG and every φ ∈ G. Since Lφ (af1 (x) + bf2 (x)) = aLφ (f1 (x)) + bLφ (f2 (x)), we have that Lφ is a linear operator. Thus, the problem we address can be restated as follows: Given the pair Lφ (f (x)), f (x) find the linear operator Lφ . Towards this goal, let us define an operator w such that: w : MG × G∗ → MG

(4)



where G is the set of weighted graphs in which the weight of each vertex consists of two elements of Y: G∗ = {< V, E, ψ >, ψ : V → Y 2 ; ψ = (ψ 1 , ψ 2 )}

(5)

The operator w acts on an image f ∈ MG and a graph gp ∈ G∗ by finding all of the graphs gp as subgraphs of the graph that represents the image f . To each detected vertex v with an intensity ψ 1 (v) the operator assigns a new value ψ 2 (v) and eliminates the graph vertices that weren’t found. Thus a new image is formed with non zeros values only at locations that satisfy the definition of the subgraph gp . This mapping, denoted by Mgp : M → M , is defined as follows: Mgp (f (x)) = w(f (x), gp ), gp ∈ G∗ . The fundamental property being exploited in this paper in order to reduce the original high dimensional problem to an equivalent problem that is linear in the unknown transformation parameters is the commutative property of the left composition operator w and the right composition operator φ, stated explicitly in the next theorem: Theorem 1. Let f, h ∈ MG and gp ∈ G∗ . Then Lφ (f (x)) = h(x) implies that Lφ (w(f (x), gp )) = w(h(x), gp ). In a more concise form: Mgp (Lφ ) = Lφ (Mgp ) Proof. [Mgp (Lφ )](f (x)) = Mgp (f (φ(x)) = w(f (φ(x)), gp ) = Lφ (w(f (x), gp )) = [Lφ (Mgp )](f (x)) Thus, knowing how Lφ acts on some function f , we know the action of Lφ on any function w(f (x), gp ), for any gp ∈ G∗ .

451

(φ−1 )0 (z)dz = dx

(6)

∗ Let us choose any p elements {gp }P p=1 ⊆ G , and as we show next, these elements are employed to translate the identity relation (3) into a set of P equations:

Z∞

Z∞ w(h(x), gp )dx

=

−∞

w(f (φ(x)), gp )dx −∞ Z∞

(φ−1 (z))0 w(f (z), gp )dz

= −∞

=

m X i=1

Z∞ bi

ei (x)w(f (x), gp )dx

−∞

p = 1, . . . , P

(7)

Rewriting (7) in a matrix form we have  R  w(h, g1 )   ..   = . R w(h, gp ) R  R   e1 w(f, g1 ) . . . em w(f, g1 ) b1    ..  .. .. ..    .  (8) . R . . R e1 w(f, gp ) . . . em w(f, gp ) bm Based on the fact that the operator w is homeomorphism invariant, we have the following theorem: Theorem: The homeomorphism φ satisfying the parametric model defined in (2) is uniquely determined iff the matrix R  R  e1 w(f, g1 ) . . . em w(f, g1 )   .. .. .. (9)   . R . . R e1 w(f, gp ) . . . em w(f, gp ) is full rank. Thus, provided that {gp }P p=1 are chosen such that (9) is full rank, the system (8) (in the absence of noise we take P = m) can be solved for the parameter vector [b1 , . . . , bm ]. It is clear that in the absence of noise, any set of weighted graphs {gp }m p=1 such that (9) is full rank is equally optimal. 5. NUMERICAL EXAMPLES The following example illustrates the proposed solution for elastic image registration. The template was taken to be an



RGB image of dimensions 314 × 314, and the observation is an elastic deformed version of it (Figure (6)) where the deformation function is: φ(x, y) = (−0.75x − 1.29y − 1.35x2 , 1.29x − 0.75y)

Fig. 8. The operator application on the template and observation based on all sub-graphs with two vertices.

Fig. 6. Template and elastically deformed observation. The estimation does not involve any search scheme, but only the application of the graph representation of each image and our linear model. Figure 7 depicts the operation of functionals that are based on all sub-graphs with one vertex only. In this example each vertex corresponds to a subset of the image color space that consist of 1003 colors out of the entire RGB color space of 2563 colors. The label assigned to each vertex is an indicator function of the corresponding color cube. Figure 8 depicts the operation of functionals that are based on all sub-graphs with two vertices, using indicator functions as vertices labels. Figure 9 depicts the operation of functionals that are based on another type of one-vertex sub-graph where different Gaussian functions of the observed intensities are employed as vertices’ labels. The estimated deformation obtained using the constraints derived using all these subgraphs is given by: φ(x, y) = (−0.73x−1.25y−1.3x2 , 1.29x−0.76y−0.01x2 ).

Fig. 7. The operator application to the template and observation using all sub-graphs with one vertex.

6. CONCLUSIONS In this paper we have considered the problem of finding the transformation relating a given observation on a planar ob-

Fig. 9. The operator application on the template and observation based on one vertex sub-graph with Gaussian weighted labels on its vertices.

ject with some pre-chosen template of this object. The direct approach for estimating the transformation is to apply each of the deformations in the group to the template in a search for the deformed template that matches the observation. The notion that a weighted graph represents the topology of images was presented. A method that employs a set of non-linear graph-based functionals to replace the original high dimensional problem by an equivalent linear problem, expressed in terms of the unknown transformation parameters, was derived. The resulting method is explicit and global. It deals with any elastic deformation, and the obtained map ϕ = H(h, f ) is continuous and involves only elementary linear analysis in the same dimension as that of the group model.


7. REFERENCES [1] U. Grenander, General Pattern Theory, Oxford University Press, 1993. [2] L. Brown, “A Survey of Image Registration Techniques,” ACM Comput. Surv., vol. 24, no. 4, pp. 326376, 1992. [3] B. Zitova and J. Flusser, “Image Registration Methods: A Survey,” Image, Vis. Comp., vol. 21, pp. 977-1000, 2003. [4] M. Miller and L. Younes, “Group Actions, Homeomorphisms, and Matching: A General Framework,” Int. Jou. Comp. Vision, vol. 41, pp. 61-84, 2002. [5] R. Hagege and J. M. Francos, “Parametric Estimation of Two-Dimensional Affine Transformations,” Proc. Int. Conf. Acoust., Speech, Signal Processing, Montreal 2004. [6] J. M. Francos, R. Hagege and B. Friedlander, “Estimation of Multi-Dimensional Homeomorphisms for Object Recognition in Noisy Environments,” Proc. Thirty Seventh Asilomar Conference on Signals, Systems, and Computers, 2003. [7] R. Hagege and J. M. Francos, “Linear Estimation of Sequences of Multi-Dimensional Affine Transformations,” Int. Conf. Acoust., Speech, Signal Processing, Toulouse, 2006. [8] R. Hagege and J. M. Francos, “Parametric Estimation of Affine Transformations: An Exact Linear Solution”, submitted for publication. [9] U. Eckhardt and L. Latecki, ”Digital Topology”, Research Trends, Council of Scientific Information, Vilayil Gardens, Trivandrum, India, 1994. [10] P. L. Bazin, L. M. Ellingsen, and D. L. Pham , ”Digital Homeomorphisms in Deformable Registration” , Information Processing in Medical Imaging p.211-222 Volume 4584/2007 Springer Berlin / Heidelberg. [11] L. G. Nonato, A. M. da Silva Junior, J. Batista, and O. M. Bruno , ”Circulation and Topological Control in Image Segmentation” , Progress in Pattern Recognition, Image Analysis and Applications p.377-391 Volume 3773/2005 Springer Berlin / Heidelberg. [12] E. Decencire and M. Bilodeau , ”Downsampling of Binary Images Using Adaptive Crossing Numbers” , ”40 Years On: Mathematical Morphology” , Volume 30 p.279-288 Springer Netherlands.


[13] A. Charnoz, V. Agnus, and L. Soler , ”Portal Vein Registration for the Follow-Up of” , Medical Image Computing and Computer-Assisted Intervention - MICCAI 2004 Volume 3216/2004 p. 878-886 Springer Berlin / Heidelberg. [14] S. Todorovic and N. Ahuja , ”Region-Based Hierarchical Image Matching” , International Journal of Computer Vision , Volume 78, N0. 1, June, 2008 p. 47-66.

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 454-469, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Trajectory Tubes of Nonlinear Differential Inclusions and State Estimation Problems Tatiana F. Filippova Department of Optimal Control Institute of Mathematics and Mechanics Russian Academy of Sciences 16 S. Kovalevskaya Str., Ekaterinburg 620219, Russia e-mail: [email protected] Abstract The paper is devoted to state estimation problems for nonlinear dynamic systems with system states being compact sets. The studies are motivated by the theory of dynamical systems with unknown but bounded uncertainty without its statistical description. The trajectory tubes of differential inclusions are introduced as the set-valued analogies of the classical isolated trajectories of uncertain dynamical systems. Applying results related to discrete-time versions of the funnel equations and techniques of ellipsoidal estimation theory developed for linear control systems, we present new approaches that allow one to find the outer and inner estimates for such set-valued states of the uncertain nonlinear control system. Numerical simulations are also given.

Key word: Differential inclusions; Uncertain dynamic system; State constraints; Viability theory; Trajectory tube; Funnel equations; State estimation; Ellipsoidal approach.

1 Introduction

The topics of this paper come from the theory of dynamical control systems with unknown, but bounded uncertainties (the case of the so-called ”set-membership” description of uncertainties) [3, 5, 11, 14, 15, 17, 18]. The motivations for these studies come from applied areas ranged from engineering problems in physics to economics as well as to ecological modelling. The paper presents recent results in the theory of tubes of solutions (trajectory tubes) to differential control systems modelled by nonlinear differential inclusions with uncertain parameters or functions.




We will start by introducing the following basic notations. Let Rn be the n–dimensional Euclidean space and x0 y be the usual inner product of x, y ∈ Rn with the prime as a transpose, with k x k = (x0 x)1/2 . Denote comp Rn to be the variety of all compact subsets A ⊆ Rn and conv Rn to be the variety of all compact convex subsets A ⊆ Rn . Let us denote the variety of all closed convex subsets A ⊆ Rn by the symbol clconv Rn . Consider the ordinary differential equation x˙ = f (t, x, u)

(1)

with function f : T × Rn × Rn → Rm measurable in t and continuous in the other variables. Here x stands for the state space vector, t stands for time (t ∈ T = [t0 , t1 ]) and u is a control or a disturbance. The variables u in (1) are assumed to be bounded u ∈ Q(t, x) (2) where Q(t, x) is a set-valued map (Q : T × Rn → compRm ) measurable in t and continuous in x. The given data allows to consider a set-valued function [ F (t, x) = { f (t, x, u) | u ∈ Q(t, x) } (3) and further on, a differential inclusion [2, 4, 8] x˙ ∈ F (t, x)

(4)

that reflects the variety of all models of type (1)-(2). Let us assume that the initial condition to the system (1) (or to the differential inclusion (4)) is unknown also but bounded x(t0 ) = x0 , x0 ∈ X 0 ∈ compRn

(5)

One of the principal points of interest of the theory of control under uncertainty conditions [14] is to study the set of all solutions x[t] = x(t, t0 , x0 ) to (1)-(5) (respectively, (4)-(5)) and furthermore the subset of those trajectories x[t] = x(t, t0 , x0 ) that satisfy both (4)-(5) and a restriction on the state vector ( the ”viability” constraint [1]) x[s] ∈ Y (s),

s ∈ [t0 , t]

(6)

where Y (·) ( Y (t) ∈ convRp ) is a convex compact valued multifunction. The viability constraint (6) may be induced by state constraints defined for a given plant model or by the so-called measurement equation [14] y(t) = G(t)x + w,

(7)

where y is the measurement, G(t) — a matrix function, w — the unknown but bounded ” noise” and w ∈ Q(t), Q(t) ∈ compRp . 2


The problem consists in describing the set X[·] = {x[·] = x(·, t0 , x0 )} of solutions to the system (4)- (5) (or to the system (4)- (6), that is the viable solution bundle or ”viability bundle”). The point of special interest is to describe the t – cross-section X[t] of this set that is actually the attainability domain of system (1), (4), (5) at the moment t. The set X[t] may be considered also as the set-valued estimate of the unknown state x(t) of the system of relations (4), (5) and (6). This estimate X[t] as a set-valued function of t ∈ [t0 , t1 ] is called the viability tube or the viable solution tube. The viability tubes were considered in some aspects in the theory of differential games [12, 13] and in nonlinear control synthesis problems [16]. The paper deals with the problems of control and state estimation for a dynamical control system described by differential inclusions with unknown but bounded initial state. The solution to the differential system is studied through the techniques of trajectory tubes with their cross-sections X(t) being the reachable sets at instant t to control system. Basing on the well-known results of ellipsoidal calculus developed for linear uncertain systems we present the modified state estimation approaches which use the special nonlinear structure of the control system and simplify calculations. Examples and numerical results related to procedures of set-valued approximations of trajectory tubes and reachable sets are also presented.

2

Preliminaries

We assume that the notions of continuity and measurability of set-valued maps are taken in the sense of [4]. Consider the differential inclusion (4), where x ∈ Rn , F is a continuous multivalued map (F : [t0 , t1 ] × Rn → convRn ) that satisfies the Lipschitz condition with constant L > 0, namely h(F (t, x), F (t, y)) ≤ L k x − y k, ∀x, y ∈ Rn where h(A, B) is the Hausdorff distance for A, B ⊆ Rn , i.e. h(A, B) = max {h+ (A, B), h− (A, B)}, with h+ (A, B), h− (A, B) being the Hausdorff semidistances between the sets A, B, h+ (A, B) = sup{d(x, B) | x ∈ A}, h− (A, B) = h+ (B, A), d(x, A) = inf {k x − y k | y ∈ A}. Assuming a set X0 ∈ compRn to be given, denote x[t] = x(t, t0 , x0 ) ( t ∈ T = [t0 , t1 ] ) to be a solution to (4) (an isolated trajectory) that starts at point x[t0 ] = x0 ∈ X0 . We take here the Caratheodory–type trajectory x[·], i.e. as an absolutely continuous function x[t] (t ∈ T ) that satisfies the inclusion d x[t] = x[t] ˙ ∈ F (t, x[t]) dt 3

(8)


for almost every t ∈ T . We require all the solutions { x[t] = x(t, t0 , x0 ) | x0 ∈ X0 } to be extendable up to the instant t1 that is possible under some additional assumptions [8]. Let Y (t) be a continuous set-valued map (Y : T → convRn ), X0 ⊆ Y (t0 ). Definition 1 [1, 15] A trajectory x[t] = x(t, t0 , x0 ) (x0 ∈ X0 , t ∈ T ) of the differential inclusion (8) is called viable on [t0 , τ ] if x[t] ∈ Y (t)

for all t ∈ [t0 , τ ].

(9)

We will assume that there exists at least one solution x∗ [t] = x∗ (t, t0 , x∗0 ) of (8) ( together with a starting point x∗ [t0 ] = x∗0 ∈ X0 ) that satisfies condition (9) with τ = t1 . Let X (·, t0 , X0 ) be the set of all solutions to the inclusion (8) that emerge from X0 ( the ”trajectory bundle”). Denote X [t] = X (t, t0 , X0 ) to be its crossection at instant t. The subset of X (·, t0 , X0 ) that consists of all solutions to (8) viable on [t0 , τ ] will be further denoted as X(·, τ, t0 , X0 ) (the ”viable trajectory bundle”) with its s – crossections as X(s, τ, t0 , X0 ), s ∈ [t0 , τ ]. We introduce symbol X[τ ] for these crossections at instant τ , namely X[τ ] = X(τ, t0 , X0 ) = X(τ, τ, t0 , X0 ) It is known that both maps X (t, t0 , X0 ), X(t, t0 , X0 ), X : T × T × compRn → compRn , X : T × T × compRn → compRn , satisfy the semigroup property: X (t, τ, X (τ, t0 , X0 )) = X (t, t0 , X0 ),

t 0 ≤ τ ≤ t ≤ t1 ,

X(t, τ, X(τ, t0 , X0 )) = X(t, t0 , X0 ),

t0 ≤ τ ≤ t ≤ t1 ,

and therefore define the generalized dynamic systems with set-valued trajectories [3, 20, 15]. The multivalued functions X [t] and X[t] ( t ∈ T ) will be referred to as the trajectory tube and viable trajectory tube (or viability tube) respectively. They may be considered as the set-valued analogies of the classical isolated trajectories constructed now under uncertainty conditions. One of the approaches that we discuss here is related to the evolution equation of the ”funnel type” that describes the dynamics of set–valued ”states”. The basic assumptions on set–valued map F (t, x) for the following results to be true may be found in [15, 9]. Let us consider the ”equation” [ lim σ −1 h( X [t + σ], (x + σF (t, x)) ) = 0, t ∈ T = [t0 , t1 ] (10) σ→+0

x∈X [t]


with ”initial condition” X [t0 ] = X0 .

(11)

We can observe that this equation is the formal analogy of the ordinary differential equation when mappings F (t, x) = {f (t, x)} and X [t] = {x[t]} (X0 = {x0 }) are single–valued. Theorem 1 [19] The multifunction X [t] = X (t, t0 , X0 ) is the unique set–valued solution to the evolution equation (10)-(11). Other versions of funnel equation (10) may be considered by substituting the Hausdorff distance h for a semidistance h+ [16]. The solution to the h+ versions of the evolution equation may be not unique and the ”maximal” one (with respect to inclusion) is studied. Now let us consider the analogy of the funnel equation (10)-(11) but now for the viable trajectory tubes X[t] = X(t, t0 , X0 ): [ \ lim σ −1 h( X[t + σ], (x + σF (t, x)) Y (t + σ) ) = 0, t ∈ T, (12) σ→+0

x∈X[t]

X[t0 ] = X0 .

(13)

The following result was proved in [15, 9] under assumptions of different type concerning mappings F (t, x) and Y (t). Theorem 2 [9, 15, 16] The multivalued function X[t] = X(t, t0 , X0 ) is the unique solution to the evolution equation (12)-(13).

3

Problem Statement

It should be noted that the exact description of reachable sets of a control system is a difficult problem even in the case of linear dynamics. The estimation theory and related algorithms basing on ideas of construction outer and inner set-valued estimates of reachable sets have been developed in [17, 5] for linear control systems. In this paper the modified state estimation approaches which use the special quadratic structure of nonlinearity of studied control system and use also the advantages of ellipsoidal calculus [17, 5] are presented. We develop here new ellipsoidal techniques related to constructing external and internal set-valued estimates of reachable sets and trajectory tubes of the nonlinear system. Some estimation algorithms basing on combination of discrete-time versions of evolution funnel equations and ellipsoidal calculus [17, 5] are given. Examples and numerical results related to procedures of set-valued approximations of trajectory tubes and reachable sets are also presented. The applications of the problems studied in this paper are in guaranteed state estimation for nonlinear systems with unknown but bounded errors and in nonlinear control theory.


The paper deals with the problems of control and state estimation for a dynamical control system x(t) ˙ = A(t)x(t) + f (x(t)) + G(t)u(t),

(14)

x ∈ Rn , t0 ≤ t ≤ T, with unknown but bounded initial condition x(t0 ) = x0 , x0 ∈ X0 , X0 ⊂ Rn ,

(15)

u(t) ∈ U, U ⊂ Rm , for a.e. t ∈ [t0 , T ].

(16)

Here matrices A(t) and G(t) (of dimensions n × n and n × m, respectively) are assumed to be continuous on t ∈ [t0 , T ], X0 and U are compact and convex. The nonlinear n-vector function f (x) in (14) is assumed to be of quadratic type f (x) = (f1 (x), . . . , fn (x)), fi (x) = x0 Bi x, i = 1, . . . , n,

(17)

where Bi is a constant n × n - matrix (i = 1, . . . , n). Consider the following differential inclusion [8] related to (14)–(16) x(t) ˙ ∈ A(t)x(t) + f (x(t)) + P (t),

for a.e. t ∈ [t0 , T ],

(18)

x(t0 ) = x0 ∈ X0 , where P (t) = G(t)U. Let absolutely continuous function x(t) = x(t, t0 , x0 ) be a solution to (18) with initial state x0 satisfying (15). The differential system (14)–(16) (or equivalently, (18)) is studied here in the framework of the theory of uncertain dynamical systems (differential inclusions) through the techniques of trajectory tubes X(·, t0 , X0 ) = {x(·) = x(·, t0 , x0 ) | x0 ∈ X0 } (19) of solutions to (14)–(16) with their t-cross-sections X(t) = X(t, t0 , X0 ) being the reachable sets at instant t for control system (14)–(16). The problem consists in describing the set X(·) = ∪x0 ∈X0 {x(·) = x(·, t0 , x0 )} of solutions to the differential inclusion (18) under constraint (6) (the viable trajectory tube). The point of special interest is to describe the t – crosssection X(t) of this map that is actually the attainability domain of this system at the instant t. Basing on results of ellipsoidal calculus ([17, 5]) developed for linear uncertain systems we present here the modified state estimation approaches which use the special structure of nonlinearity of studied control system (14)–(17) and combine advantages of estimating tools mentioned above.


4

External Estimates of Reachable Sets and Trajectory Tubes

We denote as B(a, r) the ball in Rn , B(a, r) = {x ∈ Rn : k x − a k ≤ r}, I is the identity n × n-matrix. Denote by E(a, Q) the ellipsoid in Rn , E(a, Q) = {x ∈ Rn : (Q−1 (x − a), (x − a)) ≤ 1} with center a ∈ Rn and symmetric positive definite n × n–matrix Q. For any n × n-matrix Q denote its track as Tr Q and its determinant as |Q|.
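For the numerical sketches further below, it is convenient to have minimal helpers for the ellipsoids E(a, Q); the small utility sketch here (plane case only, using the factorization E(a, Q) = a + Q^{1/2}B(0, 1)) is included purely for illustration and is not part of the estimation theory.

# Small helper sketch for ellipsoids E(a, Q) = {x : (x - a)' Q^{-1} (x - a) <= 1}.
import numpy as np

def in_ellipsoid(x, a, Q):
    d = np.asarray(x, dtype=float) - np.asarray(a, dtype=float)
    return float(d @ np.linalg.solve(Q, d)) <= 1.0

def ellipsoid_boundary(a, Q, n=200):
    """Points on the boundary of a planar ellipsoid E(a, Q), for plotting."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    circle = np.vstack((np.cos(theta), np.sin(theta)))
    L = np.linalg.cholesky(Q)                 # Q = L L', so E(a, Q) = a + L * (unit ball)
    return (np.asarray(a, dtype=float)[:, None] + L @ circle).T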

4.1

Outer Ellipsoidal Bounds

The approach presented here uses the techniques of ellipsoidal calculus developed for linear control systems [5, 17]. It should be noted that external ellipsoidal approximations of trajectory tubes may be chosen in various ways and several minimization criteria are well-known. We consider here the ellipsoidal techniques related to construction of external estimates with minimal volume (details of this approach and motivations for linear control systems may be found in [5, 17]). Assume here that P (t) = E(a, Q) in (18), matrices Bi (i = 1, ..., n) are symmetric and positive definite, A(t) ≡ A. We may assume that all trajectories of the system (18)-(15) belong to a bounded domain D = {x ∈ Rn :k x k≤ K} where the existence of such constant K > 0 follows from classical theorems of the theory of differential equations and differential inclusions [8]. From the structure (17) of the function f we have two auxiliary results. Their proofs are based on the algebraic properties of quadratic forms and are omitted here. Lemma 1 The following estimate is true k f (x) k≤ N,

N = K 2(

n X

λ2i )1/2 ,

i=1

where λi is the maximal eigenvalue for matrix Bi (i = 1, ..., n). Lemma 2 For all t ∈ [t0 , T ] the inclusion X(t) ⊂ X ∗ (t) holds where X ∗ (·) is a trajectory tube of the linear differential inclusion √ x˙ ∈ Ax + B(c, nN/2), x0 ∈ X0 ,

(20)

where c = {N/2, . . . , N/2} ∈ Rn . The following theorem gives the external estimate of the trajectory tube X(t) of the differential inclusion (18).


Theorem 3 Let X0 = B(0, r), r ≤ K and
t∗ = min{ (K − r)/(√2 M); 1/L; T }.
Then for all t ∈ [t0, t∗] the following inclusion is true:
X(t, t0, X0) ⊂ E(a⁺(t), Q⁺(t)),   (21)
where
M = K√λ + N + P,  P = (Σ_{i=1}^n a_i²)^{1/2} + √λ̃,
L = √λ + 2K (Σ_{i=1}^n λ_i²)^{1/2},
with λ, λ_i and λ̃ being the maximal eigenvalues of the matrices AA′, B_i (i = 1, ..., n) and Q respectively, and the vector function a⁺(t) and the matrix function Q⁺(t) satisfy the equations
ȧ⁺ = Aa⁺ + a + c,  a⁺(t0) = 0   (22)
Q̇⁺ = AQ⁺ + Q⁺Aᵀ + qQ⁺ + q⁻¹Q∗,  q = {n⁻¹ Tr((Q⁺)⁻¹Q∗)}^{1/2},  Q⁺(t0) = Q0 = r²I.   (23)
Here
Q∗ = (p⁻¹ + 1)Q̃ + (p + 1)Q,  Q̃ = (nN²/2) I,   (24)
and p is the unique positive solution of the equation
Σ_{i=1}^n 1/(p + α_i) = n/(p(p + 1)),   (25)
with α_i ≥ 0 (i = 1, ..., n) being the roots of the following characteristic equation:
|Q̃ − αQ| = 0.   (26)

Proof. Applying Lemmas 1-2 and the ellipsoidal techniques [5, 17] and comparing the inclusions (18) and (20) we come to the relation (21). Example 1. Consider the following control system ½ x˙ 1 = 6x1 + u1 , 0 ≤ t ≤ T, (27) x˙ 2 = x21 + x22 + u2 , X0 = B(0, 1), P (t) = B(0, 1), T = 0.15, K = 2.6.

(28)

Results of computer simulations based on the above theorem for this system are given at Fig. 1. 8
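For comparison with the ellipsoidal bound, a crude sampling approximation of X(t) can be generated directly. The sketch below (Euler integration with piecewise-constant controls sampled from P(t); step size and sample counts are arbitrary choices, and this is not the estimation method of the paper) produces such a point cloud for system (27)–(28).

# Crude sampling sketch (not the paper's method): approximate the reachable set
# X(t) of system (27)-(28) by integrating many trajectories with x(0) in X0 = B(0,1)
# and controls sampled from P(t) = B(0,1).
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n):
    v = rng.normal(size=(n, 2))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = np.sqrt(rng.uniform(size=(n, 1)))
    return r * v

def reachable_cloud(t_end=0.05, dt=1e-3, n_traj=20000):
    x = sample_ball(n_traj)                         # initial states in B(0, 1)
    for _ in range(int(round(t_end / dt))):
        u = sample_ball(n_traj)                     # controls in B(0, 1)
        dx1 = 6.0 * x[:, 0] + u[:, 0]
        dx2 = x[:, 0] ** 2 + x[:, 1] ** 2 + u[:, 1]
        x = x + dt * np.column_stack((dx1, dx2))
    return x

cloud = reachable_cloud()
print("bounding box of sampled X(t):", cloud.min(axis=0), cloud.max(axis=0))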



Figure 1: Reachable sets X(t, t0, X0) and their outer ellipsoidal estimates E(a⁺(t), Q⁺(t)) (here t∗ = √2/29.2).

4.2

Upper Ellipsoidal Bounds via Funnel Equations

Let us discuss the estimation approach based on techniques of evolution funnel equations. Consider the following system x˙ = Ax + f˜(x)d, x0 ∈ X0 , t0 ≤ t ≤ T,

(29)

where x ∈ Rn , kxk ≤ K, d is a given n-vector and a scalar function f˜(x) has a form f˜(x) = x0 Bx with a symmetric and positive definite matrix B. Note that the direct application of funnel equations for finding trajectory tubes X(t) is very difficult because it takes a huge amount of computations based on grid techniques. The following theorem related to our special case of nonlinearity presents an easy computational tool to find estimates of X(t) by step-by-step procedures. For a simpler case of system nonlinearities the approach was presented in [10]. Theorem 4 Let X0 = E(a, k 2 B −1 ) with k 6= 0. Then for all σ > 0 the following inclusion holds

X(t0 + σ, t0, X0) ⊆ E(a(σ), Q(σ)) + o(σ)B(0, 1),   (30)
where
a(σ) = a + σ(Aa + a′Ba · d + k²d),   (31)
Q(σ) = k²(I + σR)B⁻¹(I + σR)′,  R = A + 2da′B,   (32)
and lim_{σ→+0} σ⁻¹o(σ) = 0.



Proof. The funnel equation for (29) is
lim_{σ→+0} σ⁻¹ h( X(t + σ, t0, X0), ∪_{x∈X(t,t0,X0)} {x + σ(Ax + f̃(x)d)} ) = 0,  t ∈ [t0, T],  X(t0, t0, X0) = X0.   (33)
If x0 ∈ ∂X0, where ∂X0 means the boundary of X0, we have f̃(x0) = k² + 2a′Bx0 − a′Ba, and from (33) we have also
∪_{x0∈∂X0} {(I + σA)x0 + σf̃(x0)d} = ∪_{x0∈∂X0} {(I + σR)x0 + σ(k² − a′Ba)d}.   (34)

Note that if the ellipsoid in (39) gives the tube estimate for the system with ∂X0 as starting set, then also for the system with X0 as starting set. Applying Theorem 1 and taking into account the equality (5) and the above remark we come to the estimate (39). We may formulate now the following scheme that gives the external estimate of trajectory tube X(t) of the system (29) with given accuracy. Algorithm 1. Subdivide the time segment [t0 , T ] into subsegments [ti , ti+1 ] where ti = t0 + ih (i = 1, . . . , m), h = (T − t0 )/m, tm = T . • Given X0 = E(a, k02 B −1 ) with k0 6= 0, define X1 = E(a1 , Q1 ) from Theorem 4 for a1 = a(σ), Q1 = Q(σ), σ = h. • Find the smallest constant k1 such that ˜ 1 = E(a1 , k12 B −1 ), E(a1 , Q1 ) ⊂ X and it is not difficult to prove that k12 is the maximal eigenvalue of the matrix B 1/2 Q1 B 1/2 . • Consider the system on the next subsegment [t1 , t2 ] with E(a1 , k12 B −1 ) as the initial ellipsoid at instant t1 . • Next steps continue iterations 1-3. At the end of the process we will get the external estimate E(a(t), Q(t)) of the tube X(t) with accuracy tending to zero when m → ∞. Consider the estimation of the viable trajectory tube X(t) of the system (29) under constraint (6). We modify Algorithm 1 taking into account the viability constraint (6) where we take Y (t) = Y = E(y0 , D). In this case from Theorems 2-4 we have the main inclusion X(t0 + σ, t0 , X0 ) ⊆ E(a(σ), Q(σ)) ∩ E(y0 , D) + 10



+ o(σ)B(0, 1),

X0 = E(a, k 2 B −1 ),

(35)

which allows to formulate the modified algorithm which is more complicated now (all notations in (35) are taken from Theorem 4). Algorithm 2. Subdivide the time segment [t0 , T ] into subsegments [ti , ti+1 ] where ti = t0 + ih (i = 1, . . . , m), h = (T − t0 )/m, tm = T . • Given X0 = E(a, k02 B −1 ) with k0 = 6 0, define X1 = E(a1 , Q1 ) from (35) (as in Algorithm 1) for a1 = a(σ), Q1 = Q(σ), σ = h. • Consider the intersection of ellipsoids X1 = E(a1 , Q1 ) and Y (t) = Y = E(y0 , D) and find the smallest (with respect to some criterion, e.g. as in [5]) ellipsoid X1∗ = E(a∗1 , Q∗1 ) such that E(a1 , Q1 ) ∩ E(y0 , D) ⊂ E(a∗1 , Q∗1 ). • Find the smallest constant k1 such that ˜ 1 = E(a∗1 , k12 B −1 ), E(a∗1 , Q∗1 ) ⊂ X k12 is the maximal eigenvalue of the matrix B 1/2 Q∗1 B 1/2 . • Consider the system on the next interval [t1 , t2 ] with E(a∗1 , k12 B −1 ) as the initial ellipsoid taken at initial instant t1 . • Next steps continue iterations 1-4. At the end of the process we will get the external estimating tube E(a∗ (t), Q∗ (t)) of the tube X(t) with accuracy tending to zero when m → ∞. Example 2. Consider the following system   x˙1 = −x1 , 2  x˙2 = 1 x2 + 3( x1 + x2 ), 2 2 4 µ ¶ 4 0 X0 = E(0, Q0 ), Q0 = . 0 1

(36)

(37)

Here d = {0, 3}, f˜(x) = x0 Bx with B = Q−1 0 , T = 0.15. Results of computer simulations based on Theorem 4 are shown at Fig. 2. Assume now that the state constraint (6) is also present for the system (36) with Y = B(0, r), r = 2.4. Applying Algorithm 2 we discover that the viability constraint becomes important in estimation only after 10th iteration so we may use there the simpler Algorithm 1. After that beginning with the 11th iteration the whole four-steps procedure works. Fig. 3-4 illustrate this estimation process.

11

FILIPPOVA: TRAJECTORY TUBES OF NONLINEAR...

465

2 1.5

X

0

1 x2

E(a(t), Q(t)) 0.5 0 −0.5 −1 0

X(t) 0.05

t

0.1 −3

−2

−1

1

0

2

3

x1

Figure 2: Trajectory tube X(t, t0 , X0 ) and its external ellipsoidal tube E(a(t), Q(t)).

5

Internal Ellipsoidal Estimates

Consider now the internal set-valued estimates of reachable sets X(t) of the uncertain nonlinear system (29). As in the previous section we formulate first the following auxiliary result. Theorem 5 Let X0 = E(a, k 2 B −1 ) with k 6= 0. Then for the trajectory tube X(t) of the system (29) and for all σ > 0 the following inclusion holds E(a− (σ), Q− (σ)) ⊆ X(t0 + σ) + o(σ)B(0, 1),

lim σ −1 o(σ) = 0,

σ→+0

(38)

where a− (σ) = a(σ) + σˆ a, −1/2 1/2 ˆ + 2σQ(σ)1/2 (Q(σ)−1/2 QQ(σ) ˆ Q− (σ) = Q(σ) + σ 2 Q ) Q(σ)1/2 ,

(39)

and a(σ), Q(σ) are defined in Theorem 4. Proof. The proof of this result is similar to the proof of Theorem 3 [10]. Based on this result we may formulate the following scheme that gives the internal estimate of trajectory tube X(t) of the system (29). Algorithm 3. Subdivide the time segment [t0 , T ] into subsegments [ti , ti+1 ] where ti = t0 + ih (i = 1, . . . , m), h = (T − t0 )/m, tm = T . • Given X0 = E(a, k02 B −1 ) with k0 6= 0, define X1 = E(a1 , Q1 ) from Theorem 5 for a1 = a− (σ), Q1 = Q− (σ), σ = h. 12

466

FILIPPOVA: TRAJECTORY TUBES OF NONLINEAR...

3 2

Y=E(y0, D)

*

2

−1

E(a10, k11B )

X0

x2

1 0 −1 −2 −3 0

*

*

E(a10, Q10)

0.1 0.2

−3

−2

−1

0

1

2

3

x1

t

Figure 3: Viable trajectory tube X(t, t0 , X0 ) and its external ellipsoidal tube E(a∗ (t), Q∗ (t)).

• Define k12 as the minimal eigenvalue of the matrix B 1/2 Q1 B 1/2 . • Consider the system on the next subsegment [t1 , t2 ] with E(a1 , k12 B −1 ) as the initial ellipsoid at instant t1 . • Repeat consequently the steps, at the end of the process we will get the internal estimate E(a(t), Q(t)) of the tube X(t) with accuracy tending to zero when m → ∞. Note that k1 defined at second step of the algorithm is the largest positive constant such that E(a1 , k12 B −1 ) ⊂ E(a1 , Q1 ). Example 3. Consider the following system ½ x˙ 1 = x1 + x21 + x22 + u1 , , 0 ≤ t ≤ T. (40) x˙ 2 = − x2 + u2 , ˆ = I. Results of Here t0 = 0, T = 0.3, h = 0.025, a0 = a ˆ = (0, 0), Q0 = Q computer simulations based on Theorem 5 are shown at Fig. 5.

6

Conclusions

The paper deals with the problems of control and state estimation for a dynamical control system described by differential inclusions with unknown but bounded initial state. The solution to the differential system is studied through the techniques of trajectory tubes with their cross-sections X(t) being the reachable sets at instant t to control system. 13

FILIPPOVA: TRAJECTORY TUBES OF NONLINEAR...

467

3

Y=E(y , D)

E(a

0

11

(σ), Q

11

(σ))

2

x2

1

0

−1

−2

−3 −3

E(a*11, Q*11) −2

E(a*11, k212B−1) −1

0 x1

1

2

3

Figure 4: Steps 1-4 of 11th iteration of Algorithm 2.

Basing on the results of ellipsoidal calculus developed for linear uncertain systems we present the modified state estimation approaches which use the special nonlinear structure of the control system and simplify calculations. Examples and numerical results related to procedures of set-valued approximations of trajectory tubes and reachable sets were also presented.

References [1] J.-P. Aubin, Viability theory, Birkhauser, Boston, 1991. [2] J.-P. Aubin and H. Frankowska, Set-valued analysis, Birkhauser, Boston, 1990. [3] E.A. Barbashin, On the theory of generalized dynamic systems, Uchen. Zap. Moscow Univ., Matematika, 135, 110-133 (1949). [4] C. Castaing and M. Valadier, Convex analysis and measurable multifunctions, Lect. Notes in Math. , 580, 1977. [5] F.L. Chernousko, State Estimation for Dynamic Systems, CRC Press, Boca Raton, 1994. [6] A.L. Dontchev and E.M. Farkhi, Error estimates for discretized differential inclusions, Computing, 4, 349–358 (1989). [7] A.L. Dontchev and F. Lempio, Difference methods for differential inclusions: a survey, SIAM Review , 34, 263–294 (1992).

14

468

FILIPPOVA: TRAJECTORY TUBES OF NONLINEAR...

2.5

E(a+12,Q+12)

2 1.5

X0

1

x2

0.5 0 −0.5 −1 −1.5 −2

E(a−12,Q−12)

−2.5 0

X(t)

0.2 t

0.4

−1.5

−1

−0.5

0

0.5

1

1.5

2

2.5

3

x1

Figure 5: Trajectory tube X(t) and its external and internal ellipsoidal estimates E(a+ (t), Q+ (t)), E(a− (t), Q− (t)). [8] A.F. Filippov, Differential equations with discontinuous right-hand side, Nauka, Moscow, 1985. [9] T.F. Filippova, A note on the evolution property of the assembly of viable solution to a differential inclusion, Computers Math. Applic. , 25, 115-121 (1993). [10] T.F. Filippova and E.V. Berezina, On State Estimation Approaches for Uncertain Dynamical Systems with Quadratic Nonlinearity: Theory and Computer Simulations, Lecture Notes in Computer Science, Springer, 4818, 326-333 (2008). [11] E.K. Kostousova and A.B. Kurzhanski, Theoretical Framework and Approximation Techniques for Parallel Computation in Set-membership State Estimation, in Proc. of the Symposium on Modelling Analysis and Simulation, Lille, France, July 9-12, 1996, vol. 2, 1996, 849–854. [12] N.N. Krasovskii, The control of a dynamic system, Nauka, Moscow, 1986. [13] N.N. Krasovskii and A.I. Subbotin, Positional differential games, SpringerVerlag, 1988. [14] A.B. Kurzhanski, Control and observation under conditions of uncertainty, Nauka, Moscow, 1977. 15

FILIPPOVA: TRAJECTORY TUBES OF NONLINEAR...

[15] A.B. Kurzhanski and T.F. Filippova, On the theory of trajectory tubes — a mathematical formalism for uncertain dynamics, viability and control, in Advances in Nonlinear Dynamics and Control: a Report from Russia (A.B. Kurzhanski, ed.), Progress in Systems and Control Theory, Birkhauser, 1993, 17, 122-188. [16] A.B. Kurzhanski and O.I. Nikonov, On the control strategy synthesis problem. Evolution equations and set-valued integration, Doklady Akad. Nauk SSSR , 311, 788-793 (1990). [17] A.B. Kurzhanski and I. Valyi, Ellipsoidal Calculus for Estimation and Control, Birkhauser, Boston, 1997. [18] A.B. Kurzhanski and V.M. Veliov, Set-valued Analysis and Differential Inclusions, Progress in Systems and Control Theory, Birkhauser, Boston, 1990. [19] A.I. Panasyuk, Equations of attainable set dynamics, Part 1: Integral funnel equations, J. Optimiz. Theory Appl., 2, 349–366 (1990). [20] E. Roxin, On the generalized dynamical systems defined by contigent equations, J. of Diff. Equations , 1, 188-205 (1965). [21] E. Walter and L. Pronzato , Identification of parametric models from experimental data, Communications and Control Engineering Series, Springer, London, 1997.

16

469

JOURNAL 470 OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 470-488, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Approximate formulae for fractional derivatives by means of Sinc methods Tomoaki Okayama† , Takayasu Matsuo, Masaaki Sugihara Graduate School of Information Science and Technology, The University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113-8656, Japan † Tomoaki [email protected]

Abstract In this paper, two new approximate formulae for fractional derivatives are developed by means of Sinc methods. The difference of the two formulae is the variable transformations incorporated; the single exponential transformation and the double exponential transformation. We give error analysis of the formulae, and show that these formulae archive exponential convergence. Numerical examples that confirm the analysis are also given.

Keywords: fractional derivative, numerical approximation, Sinc methods

1 Introduction In the last few decades, mathematical models with fractional derivatives have been used in the fields of physics [6], engineering [16], chemistry [14], biology [9], control theory [3], and many others [1, 2, 7]. We consider two types of derivatives of order p: Riemann–Liouville type (Dap f ) and Caputo’s type (Dap f ), which are defined by (

)bpc+1 [

] Ibpc−p+1 f (t), a ( )bpc+1   d  p bpc−p+1   Da [ f ](t) = Ia f  (t), dt Dap [ f ](t) =

d dt

1

t > a,

(1)

t > a,

(2)

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

471

1 INTRODUCTION

2

respectively, where Iqa f is the Riemann–Liouville fractional integral of order q, Iqa [ f ](t) =

1 Γ(q)



t a

f (s) ds , (t − s)1−q

t > a.

(3)

In what follows, we assume p, q ∈ (0, 1). In this case, approximating fractional derivatives with high accuracy is not an easy task, because there is a weakly singular kernel called the Abel kernel in (3). Typical numerical methods for fractional derivatives in the literature are reviewed in some books cited above and some papers [4, 5]. The convergence rates of those methods are all of polynomial: O(n−γ ), where n denotes the number of evaluation of f , and γ is a positive constant. Recently, an “exponentially” converging approximate formula based on Chebyshev polynomials has been proposed by Sugiura–Hasegawa [19]. In their beautiful work, they have extended the so-called Clenshaw–Curtis rule for the definite integral ∫1 f (s) ds to the fractional derivative of Caputo’s type (2), and also pointed out that −1 the formula is also applicable to the Riemann–Liouville type (1) through the relation Dap [ f ](t) = Dap [ f ](t) + f (a)(t − a)−p /Γ(1 − p). They have shown that the formula converges uniformly on the given interval [a, b] with the exponential rate, O(e−γn ), under the assumption that f is analytic on an elliptic domain that contains the interval [a, b]. In general, however, f does not satisfy this assumption. In fact, the solution of fractional differential equations may have a singularity at the endpoint, t = a, due to the Abel kernel [8]. In such cases, their formula loses the fast convergence. On the other hand, for such singular functions, it is known in the wide range of numerical analysis that Sinc methods are quite effective (see, for example, Stenger [17]). In fact, Riley [15] employed techniques in Sinc methods to approximate integrals of the form (3), and obtained exponential convergence, O(e−γ

√ n

), despite singularities in

the kernel and the function f . This result has then been extended by Mori et al. [10] and the present authors [13], and it turned out that the convergence rate of the method can be improved to O(e−γn/ log n ). The key in this improvement is the replacement of

472

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

Tomoaki Okayama et al.

3

the variable transformation; the standard Single Exponential (SE) transformation employed in Riley’s method was replaced with a stronger transformation, the so-called Double Exponential (DE) transformation [11, 18]. The latter methods, i.e. the Sinc methods incorporated with the DE transformation, are called DE-Sinc methods, while the former ones are referred to SE-Sinc methods, accordingly. As a natural extension of these results, in the present paper we propose two new approximate formulae for Caputo’s fractional derivative (2); either based on the SE-Sinc and DE-Sinc methods. It is then shown theoretically and numerically that the convergence rate is O(e−γ

√ n

) in the first formula, and O(e−γn/ log n ) in the second formula.

These formulae are also applicable to the Riemann–Liouville fractional derivative (1) in the same manner as in Sugiura–Hasegawa [19]. This paper is organized as follows. The main results are stated in Section 2. In Section 3, we show numerical examples of the new formulae, and compare them with the one by Sugiura–Hasegawa. The proofs of the main theorems are given in Section 4.

2 Approximate formulae and their error analysis The main tool to derive approximate formulae is the Sinc approximation:

F(τ) ≈

N ∑

F( jh)S ( j, h)(τ),

τ ∈ R,

(4)

j=−N

where S ( j, h)(τ) is the Sinc function defined by S ( j, h)(τ) = sin{π(τ/h− j)}/{π(τ/h− j)}. The so-called Sinc quadrature rule is derived by integrating the both sides of (4): ∫

∞ −∞

F(τ) dτ ≈

N ∑ j=−N





F( jh) −∞

S ( j, h)(τ) dτ = h

N ∑

F( jh).

(5)

j=−N

Note that the variable τ in these formulae moves on the whole real line. If the function to be approximated is defined on a finite domain, variable transformation should be

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

473

2 APPROXIMATE FORMULAE AND THEIR ERROR ANALYSIS

4

employed in (4) or (5). There are two transformations, the SE transformation and the DE transformation, which are defined by (τ) b + a b−a tanh + , 2 2 2 (π ) b+a b−a t = ψDE tanh sinh(τ) + . a,b (τ) = 2 2 2

t = ψSE a,b (τ) =

Both transformations map τ ∈ R onto t ∈ (a, b). Their inverse functions are: −1 τ = {ψSE a,b } (t) = log

(t − a)

, b−t √   { (t − a) ( t − a )}2   1 1  −1  log τ = {ψDE + 1+ log . a,b } (t) = log  π b−t π b − t 

2.1 Derivation of a formula by means of the SE-Sinc methods 0 Recall that Caputo’s fractional derivative is defined by Dap [ f ](t) = I1−p a [ f ](t). Our

basic idea is to approximate the integral part (I1−p a ) based on the idea in Riley [15], and the derivative part ( dtd ) based on the idea in Stenger [17], respectively. Finally we combine them to approximate the target: Dap f . First we consider the approximation of I1−p g for a given function g. Changing the a original integral interval (a, t) to R by the variable transformation s = ψSE a,t (σ), we have ∫ I1−p a [g](t) =

∞ −∞

SE 0 g(ψSE (t − a)1−p a,t (σ)){ψa,t } (σ) dσ = SE p Γ(1 − p)(t − ψa,t (σ)) Γ(1 − p)





−∞

g(ψSE a,t (σ)) dσ (1 + e−σ )(1 + eσ )1−p

.

Note that the weakly singular integrand (the Abel kernel) is translated to a smooth function. Applying the quadrature rule (5) to the translated integral, we obtain the approximate formula for the integral part: Ia1−p [g](t) ≈ ISE N [g](t) =

N g(ψSE (t − a)1−p ∑ a,t (kh)) h . Γ(1 − p) k=−N (1 + e−kh )(1 + ekh )1−p

(6)

474

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

Tomoaki Okayama et al.

5

Here h is a mesh size suitably chosen depending on N, which will be described later. Next we consider the approximation of f 0 . Let us define a function Qa,b as Qa,b (t) = SE (t − a)(b − t). Putting F(τ) = f (ψSE a,b (τ))/Qa,b (ψa,b (τ)) in (4), we have

f (ψSE a,b (τ))



Qa,b (ψSE a,b (τ))

N ∑

f (ψSE a,b ( jh))

j=−N

Qa,b (ψSE a,b ( jh))

S ( j, h)(τ),

τ ∈ R,

which is equivalent to:

f (t) ≈ CSE N [ f ](t) =

N ∑

f (ψSE a,b ( jh))

j=−N

Qa,b (ψSE a,b ( jh))

−1 Qa,b (t)S ( j, h)({ψSE a,b } (t)),

t ∈ (a, b).

(7)

0 Differentiating the both sides gives an approximate formula for f 0 , i.e. f 0 ≈ {CSE N f} .

Using this and (6) with g = f 0 , we finally obtain the desired formula as follows: 0 SE 0 SE SE 0 Dap [ f ](t) = I1−p a [ f ](t) ≈ IN [ f ](t) ≈ IN [{CN f } ](t).

(8)

2.2 Derivation of a formula by means of the DE-Sinc methods We consider the use of the DE transformation instead of the SE transformation here. 1−p For the integral part I1−p g, we apply s = ψDE g is translated into a a,t (σ), then Ia

I1−p a [g](t) =

(t − a)1−p Γ(1 − p)



∞ −∞

π cosh(σ)g(ψDE a,t (σ)) dσ (1 + e−π sinh(σ) )(1 + eπ sinh(σ) )1−p

.

Applying the quadrature rule (5) to this integral, we obtain the approximate formula: Ia1−p [g](t) ≈ IDE N [g](t) =

N π cosh(kh)g(ψDE (t − a)1−p ∑ a,t (kh)) h . −π sinh(kh) Γ(1 − p) k=−N (1 + e )(1 + eπ sinh(kh) )1−p

The derivative part can be handled in the same manner. Similar to (7), using f (t) ≈ CDE N [ f ](t) =

N ∑

f (ψDE a,b ( jh))

j=−N

Qa,b (ψDE a,b ( jh))

−1 Qa,b (t)S ( j, h)({ψDE a,b } (t)),

t ∈ (a, b),

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

6

475

2 APPROXIMATE FORMULAE AND THEIR ERROR ANALYSIS

0 and differentiating both sides, we have f 0 ≈ {CDE N f } . Then we obtain the formula:

0 DE 0 DE DE 0 Dap [ f ](t) = I1−p a [ f ](t) ≈ IN [ f ](t) ≈ IN [{CN f } ](t).

(9)

2.3 Results of error analysis We here state the error analysis results of the presented approximate formulae, while their proofs are left to Section 4. Let us introduce the following function space. Definition 1. Let D be a simply-connected domain which satisfies (a, b) ⊂ D, and let α be a positive constant. Then Lα (D) denotes the family of all functions f that are analytic on D, and satisfy | f (z)| ≤ C|Qαa,b (z)| for a positive constant C and all z ∈ D. DE In the statement of theorems below, D is either ψSE a,b (Dd ) or ψa,b (Dd ), where

( { ) } arg z − a < d , ψSE (D ) = z ∈ C : d a,b b−z  √    {   (z − a) ( z − a )}2   1    1     < d arg  log . z ∈ C : 1 + ψDE (D ) = + log   d a,b    π   b−z π b − z    These are domains that are mapped by the SE or DE transformation from a strip domain

Dd = {ζ ∈ C : | Im ζ| < d},

(10)

for a positive constant d. With these notations, the approximate errors of the formula (8) and (9) are analyzed as follows. Theorem 1. Let ( f /Qa,b ) ∈ Lα (ψSE a,b (Dd )) for d with 0 < d < π. Let µ = min{1 − √ p, α}, N be a positive integer, and h be selected by h = πd/(µN). Then there exists a constant C independent of N such that √ − πdµN SE 0 ≤ CNe [{C f } ](t) max Dap [ f ](t) − ISE . N N

t∈[a, b]

(11)

476

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

Tomoaki Okayama et al.

7

Theorem 2. Let ( f /Qa,b ) ∈ Lα (ψDE a,b (Dd )) for d with 0 < d < π/2, and let f be analytic at b. Let µ = min{1 − p, α}, N be a positive integer with N > µ/(2d), and h be selected by h = log(2dN/µ)/N. Then there exists a constant C independent of N such that

DE 0 max Dap [ f ](t) − IDE N [{CN f } ](t) ≤ C

t∈[a, b]

N e−πdN/ log(2dN/µ) . log(2dN/µ)

The number of evaluation of f in these approximation formulae is n = 2N + 1, which means the convergence rate is O(e−γ

√ n

) in the SE-Sinc case, and O(e−γn/ log n ) in

the DE-Sinc case for some γ > 0; in both cases the errors decay exponentially. Remark 1. The assumption ( f /Qa,b ) ∈ Lα (D) may seem to be not practical since the function f must be zero at the endpoints by the condition | f (z)/Qa,b (z)| ≤ C|Qαa,b (z)|. But actually, functions in a certain wider, and reasonable space can be translated to those satisfying the assumption (see Stenger [17, § 4]).

3 Numerical examples In this section we consider two test functions, f1 (t) = t4/3 (1 − t)2 /Γ(7/3) and f2 (t) = t2 (1 − t)2 et , and their 1/2-order derivatives in Caputo’s sense on the interval (0, 1): t5/6 280t2 − 476t + 187 , Γ(11/6) 187 [ 1/2 }] √ { t 1 3 2 t 3 (8t − 4t − 22t + 31) + e erf( t) 8t(2t − 7t + 8) − 31 . D1/2 [ f ](t) = 2 0 16 Γ(3/2) D1/2 0 [ f1 ](t) =

Let πm denote an arbitrary positive number less than π. Then the function f1 satisDE fies ( f1 /Q0,1 ) ∈ L1/3 (ψSE 0,1 (Dπm )) and ( f1 /Q0,1 ) ∈ L1/3 (ψ0,1 (Dπm /2 )), and the function DE f2 satisfies ( f2 /Q0,1 ) ∈ L1 (ψSE 0,1 (Dπm )) and ( f2 /Q0,1 ) ∈ L1 (ψ0,1 (Dπm /2 )). In actual com-

putations, we set πm = 3.14, and then h can be selected according to Theorem 1 or Theorem 2. 1/2 The numerical result of D1/2 0 f1 is shown in Fig. 1, and the one of D0 f2 is shown

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

477

4 PROOFS OF THE THEOREMS IN SECTION 2

1

1

0.01

0.01

0.0001

0.0001

maximum error

maximum error

8

1e-06 1e-08 1e-10 1e-12

Chebyshev formula SE-Sinc formula DE-Sinc formula

1e-14

1e-06 1e-08 1e-10 1e-12

Chebyshev formula SE-Sinc formula DE-Sinc formula

1e-14

1e-16

1e-16 0

10

20

30

40

50

60

70

80

90 100

0

10

20

30

n=2N+1

Fig. 1. Approximation errors of D1/2 0 f1 .

40

50

60

70

80

90 100

n=2N+1

Fig. 2. Approximation errors of D1/2 0 f2 .

in Fig. 2. Both of the computation programs are written in C with double-precision floating-point arithmetic. The errors are checked on t = 0.01, 0.02, . . . , 0.99, and the maximum error of them is plotted on the graphs. There are three plot lines in both graphs; the formula by Sugiura–Hasegawa [19] (dashed line with × points), by the SESinc methods (solid line with 4 points), and by the DE-Sinc methods (solid line with  points). The convergence profiles of the Chebyshev formula are different between Fig. 1 and Fig. 2. This should be caused by the singularity of the function f1 at the endpoint, t = 0. In contrast, we can see that the results of the SE-Sinc formula and the DE-Sinc formula are consistent with Theorem 1 or Theorem 2 in both graphs.

4 Proofs of the theorems in Section 2 4.1 Proof of Theorem 1 (the SE-Sinc case) The following two theorems are critical to prove Theorem 1. Theorem 3. Let the assumptions of Theorem 1 are fulfilled. Then there exists a constant C independent of N such that √ 0 SE 0 ≤ Ce− πdµN . max I1−p [ f ](t) − I [ f ](t) a N

t∈[a, b]

478

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

Tomoaki Okayama et al.

9

Theorem 4 (Stenger [17, Corollary of Theorem 4.4.2]). Let the assumptions of Theorem 1 are fulfilled. Then there exists a constant C independent of N such that { √ } d ≤ CNe− πdµN . f (t) − CSE [ f ](t) sup N dt

t∈(a, b)

Using these theorems and the trivial fact supN kISE N kC([a, b]) < ∞, we get (11). In what follows, we prove Theorem 3. The next theorem is the base of the error analysis. Theorem 5 (Stenger [17, Theorem 4.2.6]). Let (FQa,b ) ∈ Lβ (ψSE a,b (Dd )) for d with √ 0 < d < π, let N be a positive integer, and h be selected by h = 2πd/(βN). Then there exists a constant C independent of N such that ∫ N √ ∑ b SE SE 0 F(s) ds − h F(ψa,b (kh)){ψa,b } (kh) ≤ Ce− 2πdβN . a k=−N Let us apply this theorem to the approximation (6). If we put F(s) = g(s)/(t − s) p in this theorem, and if g is analytic and bounded uniformly on ψSE a,t (Dd ) for all t ∈ [a, b], then (FQa,t ) ∈ L1−p (ψSE a,t (Dd )). Furthermore if we set µ = min{1 − p, α}, then (FQa,t ) ∈ SE SE Lµ (ψSE a,t (Dd )) since clearly Lν (ψa,t (Dd )) ⊆ Lρ (ψa,t (Dd )) if ν ≥ ρ. Therefore we obtain

the next result. Lemma 1. Assume that there exists a constant d with 0 < d < π such that g is analytic and bounded uniformly on ψSE a,t (Dd ) for all t ∈ [a, b]. Let µ = min{1 − p, α}, √ N be a positive integer, and h be selected by h = 2πd/(µN). Then there exists a constant C independent of N such that √ SE − 2πdµN max I1−p . a [g](t) − IN [g](t) ≤ Ce

t∈[a, b]

(12)

We can relax the condition on g using the following lemma. Lemma 2. Let g be analytic and bounded on ψSE a,b (Dd ) for d with 0 < d < π. Then g is analytic and bounded uniformly on ψSE a,t (Dd ) for all t ∈ [a, b].

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

479

4 PROOFS OF THE THEOREMS IN SECTION 2

10

Proof. We shall establish this lemma if we prove for all t ∈ [a, b] that ψSE a,t (Dd ) ⊆ SE −1 SE −1 ψSE a,b (Dd ), which is equivalent to “| Im{ψa,t } (z)| < d ⇒ | Im{ψa,b } (z)| < d” (recall

−1 that Dd is defined by (10)). Set ζ(t) = {ψSE a,t } (z) for simplicity. It is sufficient to

show that | Im ζ(t)| is a monotonically decreasing function, since from this we have | Im ζ(b)| ≤ | Im ζ(t)| < d. Let x, y ∈ R and set z = x + i y. Then Im ζ(t) is expressed as Im ζ(t) = arg

(z − a) t−z

( = arg

) ax + tx − at − x2 − y2 (t − a)y + i . (t − x)2 + y2 (t − x)2 + y2

Considering cos(Im ζ(t)) and its derivative, we have ax + tx − at − x2 − y2 cos(Im ζ(t)) = √ , (ax + tx − at − x2 − y2 )2 + (t − a)2 y2 d (t − a)((a − x)2 + y2 )y2 ≥ 0. cos(Im ζ(t)) = dt {((a − x)2 + y2 )((t − x)2 + y2 )}3/2 Thus cos(Im ζ(t)) is a monotonically increasing function. Since −π < Im ζ(t) ≤ π and cos(− Im ζ(t)) = cos(Im ζ(t)), we can see that | Im ζ(t)| is monotonically decreasing.



Therefore Lemma 1 can be rewritten as follows. Lemma 3. Let g be analytic and bounded on ψSE a,b (Dd ) for d with 0 < d < π. Let √ µ = min{1 − p, α}, N be a positive integer, and h be selected by h = 2πd/(µN). Then there exists a constant C independent of N such that (12) holds. 0 If ( f /Qa,b ) ∈ Lα (ψSE a,b (Dd )) (assumption in Theorem 1) holds, then f is analytic

and bounded on ψSE a,b (Dd− ) for any  with 0 <  < d. Choosing  = d/2 and using Lemma 3, we obtain Theorem 3.

4.2 Proof of Theorem 2 (the DE-Sinc case) Since supN kIDE N kC([a, b]) < ∞, Theorem 2 can be proved in a similar way to the SE-Sinc case, by showing the following two theorems.

480

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

Tomoaki Okayama et al.

11

Theorem 6. Let the assumptions of Theorem 2 are fulfilled. Then there exists a constant C independent of N such that 0 DE 0 −πdN/ log(2dN/µ) max I1−p . a [ f ](t) − IN [ f ](t) ≤ Ce

t∈[a, b]

Theorem 7. Let the assumptions of Theorem 2 are fulfilled. Then there exists a constant C independent of N such that } d { N DE f (t) − CN [ f ](t) ≤ C e−πdN/ log(2dN/µ) . sup dt log(2dN/µ) t∈(a, b) We first give the proof of Theorem 7, which is relatively short.

4.2.1 Proof of Theorem 7 (approximation error of derivatives) We easily obtain that     ∞   ∑ f (ψDE     a,b ( jh)) DE −1 f (t) − Q (t)S ( j, h)({ψ } (t))   a,b a,b     Q (ψDE ( jh))   j=−∞ a,b a,b ∑ f (ψDE } ( jh)) d { a,b DE −1 . (13) + Q (t)S ( j, h)({ψ } (t)) a,b a,b Qa,b (ψDE a,b ( jh)) dt | j|>N

d { f (t) − CDE [ f ](t)} ≤ d N dt dt

Let us examine the first term. We need the following definition for it. Definition 2. Let Dd () be defined for 0 <  < 1 by Dd () = {ζ ∈ C : | Re ζ| < 1/, | Im ζ| < d(1 − )}. Then H1 (Dd ) denotes the family of all functions F that are H analytic on Dd , and such that N1 (F, d) = lim→0 ∂D () |F(ζ)||dζ| < ∞. d

Then the next assertion holds for any conformal map ψ that satisfies ψ(R) = (a, b). Theorem 8 (Stenger [17, part of Theorem 4.4.2]). Assume the next two conditions: (A1) f (ψ(·))/Qa,b (ψ(·)) ∈ H1 (Dd ), d (Q (t)eisψ−1 (t) ) ≤ C/h with C depending only on ψ and Q . (A2) sup a,b a,b t∈(a, b),−π/h≤s≤π/h dt

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

4 PROOFS OF THE THEOREMS IN SECTION 2

12

˜ depending only on ψ, Q, d and f , such that Then there exists a constant C, d sup t∈(a, b) dt

    ∞   ∑   f (ψ( jh))   ˜ e−πd/h −1 f (t) − Q (t)S ( j, h)(ψ (t)) .   ≤C a,b     Q (ψ( jh)) h   j=−∞ a,b

We show that (A1) and (A2) are fulfilled with ψ(t) = ψDE a,b (t) under the assumption α DE that ( f /Qa,b ) ∈ Lα (ψDE a,b (Dd )). For (A1), it is sufficient to prove N1 (Qa,b (ψa,b (·)), d) is

finite, since | f (z)/Qa,b (z)| ≤ C|Qαa,b (z)| holds by the assumption (recall Definition 1). The next lemma shows the desired claim. Lemma 4 (Okayama et al. [12, Lemma 4.6]). Let α and d be positive constants. Then N1 (Qαa,b (ψDE a,b (·)), d) is finite for any d ∈ (0, π/2). Using the Leibniz rule and the following inequality: Qa,b (t) = 0 ({ψDE }−1 (t)) {ψDE } a,b a,b

(t − a)(b − t) b−a ≤ , √ { } π (t − a) 2 π(t − a)(b − t) 1 1+ log b−a π b−t

we easily show the condition (A2). Lemma 5. The condition (A2) in Theorem 8 holds with ψ(t) = ψDE a,b (t). Therefore we can use Theorem 8 to evaluate the first term in (13) as follows. Lemma 6. Let ( f /Qa,b ) ∈ Lα (ψDE a,b (Dd )) for d with 0 < d < π/2. Then there exists a constant C independent of h such that d sup t∈(a, b) dt

    DE ∞   ∑ f (ψ ( jh))   e−πd/h   a,b DE −1 f (t) − Q (t)S ( j, h)({ψ } (t)) ≤ C .   a,b a,b     Q (ψDE ( jh)) h   j=−∞ a,b a,b

There remains to evaluate the second term in (13); this is done by the next lemma. Lemma 7. Let ( f /Qa,b ) ∈ Lα (ψDE a,b (Dd )) for d with 0 < d < π/2. Then there exists

481

482

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

Tomoaki Okayama et al.

13

a constant C independent of h and N such that ∑ f (ψDE } { π 1 a,b ( jh)) d DE −1 ≤ C 2 Nh e− 2 α exp(Nh) . (14) Q (t)S ( j, h)({ψ } (t)) a,b a,b DE he t∈(a, b) | j|>N Qa,b (ψa,b ( jh)) dt sup

Proof. First, by the identity −1 Qa,b (t)S ( j, h)({ψDE a,b } (t)) =

hQa,b (t) 2π



π/h

DE −1

eis[{ψa,b }

(t)− jh]

ds

−π/h

and Lemma 5, it follows that for a constant C1 { } d −1 ≤ C1 /h. sup Qa,b (t)S ( j, h)({ψDE } (t)) a,b dt

(15)

t∈(a, b)

˜ Second, by the assumption ( f /Qa,b ) ∈ Lα (ψDE a,b (Dd )), there exists a constant C such that DE ˜ − a)/2}2α f (ψa,b ( jh)) ˜ α DE C{(b ˜ − a)2α e−πα sinh(| jh|) . ≤ C(b (ψ ( jh))| = ≤ C|Q a,b a,b Qa,b (ψDE cosh2α (π sinh( jh)/2) a,b ( jh)) ˜ − a)2α e 2 α , we have Furthermore using sinh(| jh|) ≥ (e| jh| − 1)/2, and putting C2 = C(b π

∑ f (ψDE ∑ π a,b ( jh)) ≤ C2 e− 2 α exp(| jh|) DE Qa,b (ψa,b ( jh)) | j|>N | j|>N ∑ π = 2C2 e− 2 α exp( jh) j>N ∞



e− 2 α exp(sh) ds { }∫ ∞ { } 2 παhe sh − π α exp(sh) ≤ 2C2 ds e 2 2 παheNh N 4C2 − π α exp(Nh) . = e 2 παheNh ≤ 2C2

π

N

Combining (15) with (16), we get (14).

(16)



Theorem 7 is then established by taking h as h = log(2dN/µ)/N in Lemma 6 and Lemma 7.

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

483

4 PROOFS OF THE THEOREMS IN SECTION 2

14

4.2.2

Proof of Theorem 6 (approximation error of integrals)

Theorem 6 can be shown in almost the same manner as the SE-Sinc case (Theorem 3). Let us start with the next theorem. Theorem 9 (Tanaka et al. [20, Theorem 3.1]). Let ( f /Qa,b ) ∈ Lβ (ψDE a,b (Dd )) for d with 0 < d < π/2. Let N be a positive integer with N > β/(4d), and let h be selected by h = log(4dN/β)/N. Then there exists a constant C independent of N such that ∫ N ∑ b DE DE 0 F(s) ds − h F(ψa,b (kh)){ψa,b } (kh) ≤ Ce−2πdN/ log(4dN/β) . a k=−N Applying this theorem to the approximation I1−p g ≈ IDE a N g, we have the next lemma. Lemma 8. Assume that there exists a constant d with 0 < d < π/2 such that g is analytic and bounded uniformly on ψDE a,t (Dd ) for all t ∈ [a, b]. Let µ = min{1 − p, α}, N be a positive integer with N > µ/(4d), and h be selected by h = log(4dN/µ)/N. Then there exists a constant C independent of N such that SE ≤ Ce−2πdN/ log(4dN/µ) . max I1−p [g](t) − I [g](t) a N

t∈[a, b]

(17)

We can relax the condition on g using the following lemma. Lemma 9. Let g be analytic and bounded on ψDE a,b (Dd ) ∪ {b} for d with 0 < d < π/2. Then g is analytic and bounded uniformly on ψDE a,t (Dd ) for all t ∈ [a, b]. Since its proof is far more complicated than the SE-Sinc case (Lemma 2), we leave it to the end of this section. If we accept this lemma, Lemma 8 can be rewritten as follows. Lemma 10. Let g be analytic and bounded on ψDE a,b (Dd )∪{b} for d with 0 < d < π/2. Let µ = min{1 − p, α}, N be a positive integer with N > µ/(4d), and h be selected by h = log(4dN/µ)/N. Then there exists a constant C independent of N such that (17) holds.

484

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

Tomoaki Okayama et al.

15

0 DE If ( f /Qa,b ) ∈ Lα (ψDE a,b (Dd )) holds, then f is analytic and bounded on ψa,b (Dd− ) for

any  with 0 <  < d. Choosing  = d/2 and using Lemma 10, we obtain Theorem 6. It remains to prove Lemma 9. We commence by showing the following lemma. Here ζ ∗ denotes a conjugate complex number of ζ, and sgn denotes the so-called sign function, defined by

      1 (x > 0),        sgn(x) =  0 (x = 0),           −1 (x < 0).

Lemma 11. Let us define Dd+ and Dd− as Dd+ = {ζ ∈ Dd : Im ζ > 0} and Dd− = {ζ ∈ Dd : Im ζ < 0}, where Dd is defined by (10). Let f be a continuous function that satisfies f (ζ ∗ ) = { f (ζ)}∗ on Dd . Then the next two assertions are equivalent: (a) The function f satisfies that (a1) Im{ f (ζ)} = 0 if and only if Im ζ = 0 on Dd , and (a2) there exists ζ0 ∈ Dd+ such that Im{ f (ζ0 )} > 0. (b) The function f satisfies sgn[Im{ f (ζ)}] = sgn[Im ζ] for all ζ ∈ Dd , i.e. Im{ f (ζ)} > 0 for all ζ ∈ Dd+ ,

(18)

Im{ f (ζ)} = 0 for all ζ ∈ R,

(19)

Im{ f (ζ)} < 0 for all ζ ∈ Dd− .

(20)

Proof. We show only (a) ⇒ (b) since clearly (b) ⇒ (a) holds. The second condition (19) is obvious by the assumption (a1). Suppose (18) does not hold, i.e. there exists η0 ∈ Dd+ such that Im{ f (η0 )} ≤ 0. Let C be a closed arc defined by C = {ζ = λζ0 + (1 − λ)η0 : λ ∈ [0, 1]}, where ζ0 is given in the assumption (a2). Since Im f is continuous on C , there exists ζ ∈ C such that Im{ f (ζ)} = 0 by the intermediate-value theorem. This is contradictory to the assumption (a1). Thus the first condition (18) holds.

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

485

4 PROOFS OF THE THEOREMS IN SECTION 2

16

The third condition (20) can be shown in the same manner as (18), if we find some ξ0 ∈ Dd− such that Im{ f (ξ0 )} < 0. By assumption it holds that Im{ f (ζ0 )}+Im{ f (ζ0∗ )} = Im[ f (ζ0 )+ f (ζ0∗ )] = Im[ f (ζ0 )+{ f (ζ0 )}∗ ] = Im[2 Re{ f (ζ0 )}] = 0. Thus Im{ f (ζ0∗ )} < 0, which completes the proof.



Let us define functions G1 and G2 as G1 (η) = η +



1+

η2 ,

√ 1 + η2 G2 (η) = . 1 + e−πη

We can check that G1 satisfies the assumptions of Lemma 11 and the assertion (a). Hence we have the next lemma from the assertion (b). Lemma 12. The equality sgn[Im{G1 (η)}] = sgn[Im η] holds for all η ∈ D1 . In fact it holds for all η ∈ C, but η ∈ D1 is sufficient for our purpose. Lemma 11 can not be applied directly to the function G2 since G2 (±i ) = ±∞, i.e. G2 is not continuous at η = ±i . However, G2 still satisfies Im{G2 (+i )} = +∞ > 0 and Im{G2 (−i )} = −∞ < 0. Therefore sgn[Im{G2 (η)}] = sgn[Im{η}] holds even if η = ±i . Then we have the next lemma. Lemma 13. The equality sgn[Im{G2 (η)}] = sgn[Im η] holds for all η ∈ D1 . Using the two lemmas above, we prove Lemma 9. Proof. In the same argument as in Lemma 2, we consider the function cos(Im ζ(t)), −1 where ζ(t) = {ψDE a,t } (z), and show it is a monotonically increasing function. Since

d d cos(Im(ζ(t))) = − sin(Im ζ(t)) {Im ζ(t)}, dt dt we examine the signs of sin(Im ζ(t)) and {Im ζ}0 below. Let us define a function η(t) as

486

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

Tomoaki Okayama et al. η(t) =

1 π

log

(

z−a t−z

17

) √ . Then ζ(t) = log{η(t) + 1 + η2 (t)}, and [ ] √ Im η(t) + 1 + η2 (t) , sin(Im ζ(t)) = √ η(t) + 1 + η2 (t)

because sin(arg(ξ)) = Im ξ/|ξ| for all ξ ∈ C. We set xt = Re η(t) and yt = Im η(t) here. Note η(t) ∈ D1 by definition. According to Lemma 12, it follows for all η(t) ∈ D1 that {

[

sgn Im η(t) +



]} 1+

η2 (t)

{ [ ]} = sgn Im G1 (η(t)) = sgn{yt }.

(21)

Next we examine the sign of {Im ζ}0 . The function Im ζ(t) can be written as Im ζ(t) =

[ { } { }] √ √ 1 log η(t) + 1 + η2 (t) − log η∗ (t) + 1 + {η∗ (t)}2 . 2i

By differentiating and rewriting this equation, we obtain d Im ζ(t) = dt

[ ] √ Im (t − z) 1 + η2 (t) 2 . √ (t − z) 1 + η2 (t)

From the definition of η(t), we have (t − z) = (t − a)/(1 + eπη(t) ), and then 2  √   1 + η2 (t)  d 1 + eπη(t)  .  Im ζ(t) = √ Im  dt 1 + eπη(t)  (t − a){1 + η2 (t)} Thus by applying Lemma 13, it follows for all η(t) ∈ D1 that  √     { [ ]}    1 + η2 (t)  = sgn Im G2 (−η(t)) = − sgn {yt } . sgn  Im        1 + eπη(t) 

(22)

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

REFERENCES

18

Finally, using the expressions (21) and (22), we obtain the desired conclusion:  2  }    πη(t)    d {sgn(yt )}2 1 + e   { } sgn cos(Im(ζ(t))) = sgn ≥ 0. √   √    2 (t)}  dt   2 (t − a){1 + η sgn η(t) + 1 + η (t)  {

Acknowledgement This study was supported by Global COE Program “The research and training center for new development in mathematics,” MEXT, Japan.

References [1] G. A. Anastassiou, Opial type inequalities involving Riemann–Liouville fractional derivatives of two functions with applications, Math. Comput. Modelling, 48, 344–374 (2008). [2] A. Carpinteri, F. Mainardi (eds.), Fractals and Fractional Calculus in Continuum Mechanics, Springer-Verlag, Wien, 1997. [3] S. Das, Functional Fractional Calculus for Systems Identification and Controls, Springer, Berlin, 2007. [4] K. Diethelm, An investigation of some nonclassical methods for the numerical approximation of Caputo-type fractional derivatives, Numer. Algorithms, 47, 361–390 (2008). [5] K. Diethelm, N. J. Ford, A. D. Freed, Y. Luchko, Algorithms for the fractional calculus: A selection of numerical methods, Comput. Methods Appl. Mech. Engrg., 194, 743–773 (2005). [6] R. Hilfer (ed.), Applications of Fractional Calculus in Physics, World Scientific, Singapore, 2000. [7] A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier, Amsterdam, 2006.

487

488

APPROXIMATE FORMULAE FOR FRACTIONAL DERIVATIVES

Tomoaki Okayama et al.

19

[8] Ch. Lubich, Runge–Kutta theory for Volterra and Abel integral equations of the second kind, Math. Comp., 41, 87–102 (1983). [9] R. L. Magin, Fractional Calculus in Bioengineering, Begell House, Connecticut, 2006. [10] M. Mori, A. Nurmuhammad, T. Murai, Numerical solution of Volterra integral equations with weakly singular kernel based on the DE-sinc method, Japan J. Indust. Appl. Math., 25, 165–183 (2008). [11] M. Mori, M. Sugihara, The double-exponential transformation in numerical analysis, J. Comput. Appl. Math., 127, 287–296 (2001). [12] T. Okayama, T. Matsuo, M. Sugihara, Error estimates with explicit constants for Sinc approximation, Sinc quadrature and Sinc indefinite integration, Mathematical Engineering Technical Reports, 2009-01, The University of Tokyo, 2009. [13] T. Okayama, T. Matsuo, M. Sugihara, Sinc-collocation methods for weakly singular Fredholm integral equations of the second kind, Mathematical Engineering Technical Reports, 2009-02, The University of Tokyo, 2009. [14] I. Podlubny, Fractional Differential Equations, Academic Press, San Diego, 1999. [15] B. V. Riley, The numerical solution of Volterra integral equations with nonsmooth solutions based on sinc approximation, Appl. Numer. Math., 9, 249–257 (1992). [16] J. Sabatier, O. P. Agrawal, J. A. T. Machado (eds.), Advances in Fractional Calculus: Theoretical Developments and Applications in Physics and Engineering, Springer, Dordrecht, 2007. [17] F. Stenger, Numerical Methods Based on Sinc and Analytic Functions, Springer-Verlag, New York, 1993. [18] M. Sugihara, T. Matsuo, Recent developments of the Sinc numerical methods, J. Comput. Appl. Math., 164/165, 673–689 (2004). [19] H. Sugiura, T. Hasegawa, Quadrature rule for Abel’s equations: Uniformly approximating fractional derivatives, J. Comput. Appl. Math., 223, 459–468 (2009). [20] K. Tanaka, M. Sugihara, K. Murota, M. Mori, Function classes for double exponential integration formulas, Numer. Math., 111, 631–655 (2009).

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 489-503, 2010, COPYRIGHT 2010 EUDOXUS PRESS, 489LLC

Boundary type quadrature formulas over axially symmetric regions Tian-Xiao He Dept. Math & CS, Illinois Wesleyan University Bloomington, IL 61702-2900

Abstract A boundary type quadrature formula (BTQF) is an approximate integration formula with all its of evaluation points lying on the boundary of the integration domain. This type formulas are particularly useful for the cases when the values of the integrand functions and their derivatives inside the domain are not given or are not easily determined. In this paper, we will establish the BTQFs over some axially symmetric regions. We will discuss the following three questions in the construction of BTQFs: (i) What is the highest possible degree of algebraic precision of the BTQF if it exists? (ii) What is the fewest number of the evaluation points needed to construct a BTQF with the highest possible degree of algebraic precision? (iii) How to construct the BTQF with the fewest evaluation points and the highest possible degree of algebraic precision?

1

Introduction

Although numerical multivariate integration is an old subject, it has never been applied as widely as it is now. We can find its applications everywhere in math, science, and economics. A good example might be the collateralized mortgage obligation (CMO), which can be formulated as a multivariate integral over the 180dimensional unit cube ([2]). A boundary quadrature formula is an approximate integration formula with all its evaluation points lying on the boundary of the domain of integration. Such a formula may be particularly useful for the cases when the values of the integrand function and its derivatives inside the domain are not given or are not easily determined. Indeed, boundary quadrature formulas are not really new. From the viewpoint of numerical analysis, the classical Euler-Maclaurin summation formula and the Hermite two-end multiple nodes quadrature formulas may be regarded as onedimensional boundary quadrature formulas since they make use of only the integrand function values and their derivatives at the limits of integration. The earliest example of a boundary quadrature formula with some algebraic precision for multivariate integration is possibly the formula of algebraic precision (or degree) 5 for a triple integral over a cube given by Sadowsky [30] in 1940. He used 42 points 1

490

2

T.X. He

on the surface of a cube to construct the quadrature, which has been modified by the author with a quadrature of 32 points, the fewest possible boundary points (see [9] and [10]). Some 20 years later after Sadowsky’s work, Levin [26] and [27], Federenko [6], and Ionescu [21] investigated individually certain optimal boundary quadrature formulas for double integration over a square using partial derivatives at some boundary points of the region. Despite these advances, however, both the general principle and the general technique for construction remained lacking for many years. During 1978-87, based on the ideas of the dimension-reducing expansions (DRE) of multivariate integration shown in Hsu 1962 and 1963, Hsu, Wang, Zhou, Yang, and the author developed a general process for the construction of BTQFs in [17][20] and [9]-[15]. The analytic approach for constructing BTQFs is based on the dimensionreducing expansions (DRE), which reduces a higher dimensional integral to lower dimensional integrals with or without a remainder. Hence, a type of boundary quadratures can be constructed by using the expansions. The DRE without remainder is also called an exact DRE. Obviously, a DRE can be used to reduce the computation load of many very high dimensional numerical integration’s, such as the CMO problem mentioned above. Most DRE’s are based on Green’s Theorem in real or complex field. In 1963, using the theorem, Hsu [17] devised a way to construct a DRE with algebraic precision (degree of accuracy) for multivariate integrations. From 1978 to 1986, Hsu, Zhou, and the author (see [18], [19], [20], and [?]) developed a more general method to construct a DRE with algebraic precision and estimate its remainder. In 1972, with the aid of Green’s Theorem and the Schwarz function, P.J. Davis [4] gave an exact DRE for a double integral over a complex field. In 1979, also by using Green’s Theorem, Kratz [24] constructed an exact DRE for a function that satisfied a type of partial differential equations. Lastly, if we want this introduction to be complete, we must not overlook Burrows’ DRE for measurable functions. His DRE can reduce a multivariate integration into an one dimensional integral. Some important applications of DRE include the construction of BTQFs and asymptotic formulas for oscillatory integrals, for instance, the integrals on spheres, S d = {x ∈ Rd : |x| = 1} and balls, B d = {x ∈ Rd : |x| ≤ 1}, presented by Kalnins, Miller, Jr., and Tratnik [22], Lebedev and Skorokhodov [25], Mhaskar, Narcowich, and Ward [28], Xu [35], etc. In this paper, we will discuss the algebraic approach to constructing BTQFs for a multiple integral over a bounded closed region Ω in Rn , which is of the form Z w(X)f (X)dX. Ω

In this expression, w(X) and f (X) are continuous on Ω, and w(X) is the weight function. (w(X) can be 1 particularly.) We are seeking the BTQF of the integral with the form Z X X m ,··· ,m n w(X)f (X)dX ≈ ai 1 Dm1 ,··· ,mn f (Xi ), (1) Ω

0≤m1 +···+mn ≤m i∈I

where dX is the volume measure; aim1 ,··· ,mn (i ∈ I and 0 ≤ m1 + · · · + mn ≤ m) are mn 1 real or complex quadrature coefficients; Dm1 ,··· ,mn = ∂ m1 +···+mn / ∂xm 1 · · · xn ;

491

Boundary type quadrature formulas

3

and Xi = (xi,1 , xi,2 , · · · , xi,n ) (i ∈ I) are evaluation points (or nodes) of f on ∂Ω, the boundary of Ω. In particular, when m = 0 we write aim1 ,··· ,mn = ai and formula (1) can be rewritten as Z X w(X)f (X)dX ≈ ai f (Xi ). (2) Ω

i∈I

(2) is called a BTQF without derivative terms. When m 6= 0, (1) is called a BTQF with derivative terms. The corresponding error functionals of approximations (1) and (2) are defined respectively by Z E(f ) ≡ E(f ; Ω) = w(X)f (X)dX X XΩ m ,··· ,m n (3) Dm1 ,··· ,mn f (Xi ) − ai 1 0≤m1 +···+mn ≤m i∈I

and

Z E(f ) ≡ E(f ; Ω) =

w(X)f (X)dX − Ω

X

ai f (Xi ).

(4)

i∈I

Suppose that ∂Ω can be described by a system of parametric equations. In particular, the points X = (x1 , · · · , xn ) on ∂Ω satisfy the equation Φ(X) = 0,

(5)

where Φ has continuous partial derivatives. In addition, Φ(X) ≤ 0 for all points in Ω. Let S be another region in Rn , and let J : Y = JX, X ∈ Ω, be a transform from Ω to S with positive Jacobian ∂(Y ) > 0, |J| = ∂(X) X ∈ Ω. J is one-to-one and has the inverse J −1 : X = J −1 Y , Y ∈ S. Denote w1 (Y ) = w1 (JX) = w(X). Then for any continuous function g(X) Z Z w1 (Y )g(Y )dY = w1 (Y )g(Y )|J|dX. S



Denoting Yi = JXi (i ∈ I), |Ji | = |J|X=Xi , and taking f (X) = |J|g(Y ) = |J|g(JX) in equation (4), we obtain Z Z X X E(|J|g; Ω) = w(X)|J|g(Y )dX − ai |Ji |g(Yi ) = w1 (Y )g(Y )dY − bi g(Yi ), Ω

i∈I

S

i∈I

where bi = ai |Ji | (i ∈ I). Obviously, if Y, the boundary points of S, satisfy Φ1 (Y ) = Φ1 (JX) = Φ(X) = 0, then J maps the boundary evaluation points Xi (i ∈ I) on Ω onto the boundary evaluation points Yi = JXi on S. Consequently, we have the following result.

492

4

T.X. He

Theorem 1 Let the error functional of the quadrature formula Z X w1 (Y )g(Y )dY ≈ bi g(Yi ) S

(6)

i∈I

R P be E(g; S) = S w1 (Y )g(Y )dY − i∈I bi g(Yi ). Then E(g; S) = E(|J|g; Ω). In particular, if |J| is a constant, then E(g; S) = |J|E(g; Ω). In this case, E(g; Ω) = 0 implies E(g; S) = 0. In addition, if the boundary of S is defined by Φ1 (Y ) = Φ1 (JX) = Φ(X) = 0 and Φ(X) = 0 defines the boundary of Ω, then quadrature formula (6) is also a BTQF. In this paper, we will establish the BTQFs over some axially symmetric regions or fully symmetric regions (see the definitions below). Theorem 1 tells us that we can construct the BTQFs over many more regions from the obtained BTQFs over the special regions by using certain transforms. In addition, if the transform is linear, then the new BTQF is of the same algebraic precision degree as the old BTQF.

2

BTQFs without derivatives

Three questions arise during the construction of BTQFs (1): (i) What is the highest possible degree of algebraic precision of the BTQF if it exists? (ii) What is the fewest number of the evaluation points needed to construct a BTQF with the highest possible degree of algebraic precision? (iii) How to construct the BTQF with the fewest evaluation points with the fewest evaluation points and the highest possible degree of algebraic precision? We now answer the first question. In most cases, BTQF (1) has an inherent highest degree of algebraic precision. For instance, if Φ(X) is a polynomial of degree m, then the highest possible degree of algebraic precision of the BTQF without derivative terms (i.e., formula (2)) cannot exceed m − 1 because the summation on the right-hand side of (2) becomes zero and the integral value on the left-hand side is negative when f = Φ. Hence, when the boundary function Φ is a polynomial of a low degree, to raise the degrees of algebraic precision of the quadrature formulas, we must construct BTQFs with derivative terms (i.e., formula (1) with m 6= 0). In the following, we are going to find the solutions to questions (ii) and (iii). To simplify our discussion, we limit the region in question, Ω, to be axially symmetric or fully symmetric. An axially symmetric region is a region that for any point X = (x1 , · · · , xn ) in it, must contain all points with the form (±x1 , · · · , ±xn ). The set of axially symmetric points associated with X forms a reflection group. If a region containing a point X = (x1 , · · · , xn ) also contains all points (±a1 , · · · , ±an ), where (a1 , · · · , an ) is a permutation of (x1 , · · · , xn ), then the region is called a fully symmetric region. Throughout, we will denote all fully symmetric points, (±a1 , · · · , ±an ), associated with X by XF S and call X the generator of the fully symmetric point set. The cardinal number of the set of fully symmetric points

493

Boundary type quadrature formulas

5

associated with a generator X ∈ Rn is 2n (n!). Obviously, a fully symmetric region is an axially symmetric region, but the converse is not true. A quadrature formula is called a fully symmetric quadrature formula if the quadrature sum can be divided into several subsums such that in each of the subsums, the evaluation points are fully symmetric and the corresponding quadrature coefficients are the same. In addition, if the fully symmetric evaluation points are on the boundary of the integral region, then the corresponding quadrature formula is called a fully symmetric BTQF. Denote a monomial in terms of X by X α (α ∈ Zn0 ), which can be written in the αn α 1 form X α = xα 1 , · · · , xn , where (α1 , · · · , αn ) is called the exponent of X . From the definition of the fully symmetric region, we immediately have the following results. Theorem 2 The value of a multiple integral of a monomial X α over an axially symmetric region is zero if α contains an odd component. The value of a multiple integral of X α over a fully symmetric region depends on α, but is independent of the order of αi (i = 1, · · · , n). Theorem 3 Denote by πrn (X) the set of all polynomials of degree no greater than r. Let Ω be a fully symmetric region, Z X f (X)dX ≈ ai f (Xi ) (7) Ω

i∈I

be a fully symmetric BTQF, and E : f → R be the error operator defined by Z X E(f ) ≡ E(f ; Ω) = f (X)dX − ai f (Xi ). Ω

i∈I

n (The above expression is a special form of (4) with w(X) = 1.) Then π2k+1 ⊂ N (E), the null space of E, if and only if 2kn 1 x2k ∈ N (E) 1 · · · xn

0 ≤ k1 ≤ · · · ≤ kn , k1 + · · · + kn ≤ k.

(8)

Theorem 3 can be considered as the general principle for constructing fully symmetric BTQFs. First, we set one or more sets of fully symmetric evaluation points, with possibly some unknown points {Xi }, on the boundary ∂Ω and assume the quadrature coefficients ai corresponding to each set to be the same. Then 2kn 1 (0 ≤ k1 ≤ · · · ≤ kn and k1 + · · · + kn ≤ k) substituting all f (X) = x2k 1 · · · xn into E(f ) = 0, we obtain a system about Xi and ai . Finally, we solve the system for Xi and ai and a quadrature formula is constructed. However, a fully symmetric quadrature formula usually has too many evaluation points. (Remember that for a point X ∈ Rn there are, in general, 2n (n!) fully symmetric points.) In order to reduce the number of evaluation points in the quadrature formula, we can use an alternative form of Theorem 3 to construct a different type of symmetric quadrature formulas. We will use the following example to illustrate the idea. Example 1. Consider a triple integral over the region C3 = [−1, 1]3 . Obviously, the inherent highest degree of algebraic precision of the BTQF

494

6

T.X. He

is 5. To construct a fully symmetric BTQF, we make use of the following fully symmetric evaluation points. (1, 0, 0)F S ,

(1, 1, 0)F S ,

and(1, x0 , x0 )F S ,

where x0 (0 < x0 < 1) is undetermined. The three sets of fully symmetric points contain a total of 42 points (6, 12, and 24 points for the first, second, and third set respectively). Let the respective quadrature coefficients for each set of fully symmetric points be L, M , and N , all of which can be found using the general principle for constructing fully symmetric BTQFs. Substitute f = 1, x2 , x4 , and x2 y 2 into Z X X X f (x, y, z)dxdydz = a1 f6 + a2 f12 + a3 f24 , C3

P P P where f6 , f12 , and f24 are the sums of the function values of f over the first, second, and third set of symmetric points, respectively. Solving the above system yields r 5 364 160 64 , a1 = , a2 = − , a3 = , x0 = 8 225 225 225 giving the following BTQF of algebraic precision order 5. Z i X X 4 h X f (x, y, z)dxdydz ≈ 91 f6 − 40 f12 + 16 f24 . 225 C3

(9)

Quadrature formula (9), given by Sadowsky [30], uses too many evaluation points. Carefully considering Theorem 3, we find that the principle of constructing fully symmetric BTQFs shown in the theorem can be used to construct some “partial” symmetric BTQFs with fewer evaluation points. A set of points Xi ∈ Rn (i ∈ I) is called a symmetric point set of degree k if it possesses P the following two properties. (a) i∈I f (Xi ) = 0 for all f (X) = X α , where α contains an odd component. P 2kn 1 (b) i∈I f (Xi ) are the same for all f (X) = x2k 1 · · · xn , 2(k1 + · · · + kn ) = r. Here, r ≤ k. Obviously, a set of fully symmetric points must be a set of symmetric points of any degree, but the converse is not true. For instance, a symmetric point set of degree 5 may not be a fully symmetric point set. We now list all symmetric point sets of degree 5 on the boundary of C3 as follows. I = {(±1, ±x0 , 0), (±x0 , 0, ±1), (0, ±1, ±x0 ), 0 < x0 < 1}, II = {(±y0 , ±1, 0), (±1, 0, ±y0 ), (0, ±y0 , ±1), 0 < y0 < 1}, III = {(±1, ±1, 0), (±1, 0, ±1), (0, ±1, ±1)}, IV = {(±1, ±1, ±1)}, V = {(1, 0, 0)F S }, V I = {(1, x1 , x2 )F S , 0 < x1 , x2 < 1}, V II = {(1, 1, x3 )F S , 0 < x3 < 1}, where sets V , V I, and V II are fully symmetric, but others are not. If a BTQF constructed by using symmetric point set of degree k satisfies condition (8), then it is called a symmetric BTQF of degree k. Example 2. As an example, we now use the sets I, III, and IV to construct a symmetric BTQF of degree 5 with 32 evaluation points over C3 . Denote the quadrature coefficients corresponding to I, III, and IV as a1 , a2 , and a3 respectively.

495

Boundary type quadrature formulas

7

Following the procedure shown in Example 1, we obtain a symmetric BTQF of degree 5 as follows Z f (x, y, z)dxdydz C3

i X X 1 h X 80 f12 (I) − 52 f12 (III) + 21 f8 (IV ) , (10) ≈ 63 P P P f12 (I), f12 (III), and f8 (IV ) are the sums of the function values of f over the symmetric point sets I, III, and IV , respectively; the numbers in the subindices are the cardinal numbers of the corresponding set. Similarly, we can use sets II, III, and IV to construct another symmetric BTQF of degree 5. Z f (x, y, z)dxdydz C3



i X X 1 h X f12 (II) − 52 f12 (III) + 21 f8 (IV ) , 80 63

(11)

q 3 where y0 = 10 in set II. Quadratures (10) and (11) can be considered as two special cases of the following symmetric BTQF of degree 5, which is constructed by using I, II, and IV . Z 4(1 + y02 ) X f (x, y, z)dxdydz ≈ f12 (I) 9(y02 − x20 ) C3 4(1 + x20 ) X 1X + f12 (II) + f8 (IV ), (12) 2 2 9(x0 − y0 ) 3 where

sr s 3 13 8 − 5y02 ≤ y ≤ 1, y0 6= − 1, and x0 = . 10 5 5(1 + y02 ) q 3 we obtain formulas (10) and (11), respectively. When y0 = 1 and y0 = 10 It can be proved that the minimum number of evaluation points of symmetric BTQFs is 32. Since the quadrature formula is symmetric, on each boundary plane we must have the same number of evaluation points. Let the number of evaluation points on each boundary plane be k = 2 (Obviously, k cannot be 1). The symmetric point set has to be I or II. It is easy to check that the sets cannot yield a symmetric BTQF of degree 5. Similarly, for the cases of k = 3, · · · , 9, no matter which symmetric point sets are chosen from {I, · · · , V II}, we find that there does not exist any symmetric BTQFs of degree 5 with evaluation points less than 32. For k ≥ 10, every symmetric BTQF of degree 5, if it exists, must have more than 32 evaluation points. Thus, we obtain the following proposition. r

Proposition 4 There exist infinitely many symmetric BTQFs of degree 5 with 32 evaluation points. In addition, the number of evaluation points of a symmetric BTQFs of degree 5 can not be less than 32.

496

8

T.X. He

For BTQFS of degree 3, the minimum number of the evaluation points is reduced to 6. As an example, we give the following formula. Z

4 [f (1, 0, 0) + f (−1, 0, 0) 3 C3 +f (0, 1, 0) + f (0, −1, 0) + f (0, 0, 1) + f (0, 0, −1)] . f (x, y, z)dxdydz ≈

Example 3. We will use a double layered spherical shell as an example to demonstrate the techniques of regrouping evaluation points to obtain the symmetric BTQF with the fewest evaluation points. A double layered spherical shell in Rn , denoted by Shn , is defined by Shn = {X ∈ Rn : a2 ≤ |X| ≤ b2 }. It is easy to find that the largest degree of algebraic precision of BTQFs over Shn without derivatives is 3. We choose the following point sets as evaluation points: V III = {(±b, 0, · · · , 0), (0, ±b, 0, · · · , 0), · · · , (0, · · · , 0, ±b, 0)}, IX = {(0, · · · , 0, ±b)}, X = {(0, · · · , 0, ±a)}. Obviously, these sets are neither fully symmetric point sets nor symmetric point sets of degree 3, but by using these sets, we can construct a BTQF of degree 3 over Shn with the fewest evaluation points. Denote the quadrature coefficients corresponding to V III, IX, and X by a1 , a2 , and a3 , respectively. The BTQF generated, Z X X X f (X)dX ≈ a1 f2(n−1) (V III) + a2 f2 (IX) + a3 f2 (X), (13) Shn

is of algebraic precision of degree 3 if it holds exactly for f = 1, x_1^2, and x_n^2; i.e., the coefficients a_i (i = 1, 2, 3) have to be

\[
\begin{aligned}
a_1 &= \alpha\,(b^2-a^2)\big(b^{n+2}-a^{n+2}\big),\\
a_2 &= \alpha\,\big(b^{n+4} + (n+1)a^{n+2}b^2 - 3b^{n+2}a^2 - (n-1)a^{n+4}\big),\\
a_3 &= \alpha\, b^2\big(2b^{n+2} - (n+2)a^{n} b^2 + n\,a^{n+2}\big),
\end{aligned}
\]

where

\[
\alpha = \frac{\pi^{n/2}}{2b^2\,\Gamma\!\big(\tfrac{n}{2}+1\big)\,(n+2)\,(b^2-a^2)}.
\]

When n = 2 and 3, formula (13) gives BTQFs over a ring domain and a 3-dimensional double layered spherical shell, respectively, as follows:

\[
\int_{Sh_2} f(x,y)\,dx\,dy \approx \frac{\pi(b^2-a^2)}{8b^2}\Big\{(b^2+a^2)\big[f(b,0)+f(-b,0)\big] + 2b^2\big[f(0,a)+f(0,-a)\big] + (b^2-a^2)\big[f(0,b)+f(0,-b)\big]\Big\},
\]
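Here is a short numerical check of the ring-domain rule (a sketch, not part of the original paper); the radii a and b are illustrative values only:

```python
import itertools
import numpy as np

# The ring-domain rule above should integrate x^m y^n exactly over the ring
# a <= sqrt(x^2+y^2) <= b whenever m + n <= 3.
a, b = 0.5, 1.3

def ring_rule(f):
    w = np.pi * (b**2 - a**2) / (8 * b**2)
    return w * ((b**2 + a**2) * (f(b, 0) + f(-b, 0))
                + 2 * b**2 * (f(0, a) + f(0, -a))
                + (b**2 - a**2) * (f(0, b) + f(0, -b)))

def ring_exact(m, n):
    # exact integral of x^m y^n over the ring, computed in polar coordinates
    if m % 2 or n % 2:
        return 0.0
    ang = {(0, 0): 2 * np.pi, (2, 0): np.pi, (0, 2): np.pi}[(m, n)]
    rad = (b**(m + n + 2) - a**(m + n + 2)) / (m + n + 2)
    return ang * rad

for m, n in itertools.product(range(4), repeat=2):
    if m + n <= 3:
        assert abs(ring_rule(lambda x, y: x**m * y**n) - ring_exact(m, n)) < 1e-12
print("ring-domain rule exact up to degree 3")
```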


\[
\begin{aligned}
\int_{Sh_3} f(x,y,z)\,dx\,dy\,dz \approx \frac{2\pi}{15b^2(b^2-a^2)}\Big\{ &(b^2-a^2)(b^5-a^5)\big[f(b,0,0)+f(-b,0,0)+f(0,b,0)+f(0,-b,0)\big]\\
 &+ b^2\,(2b^5-5a^3b^2+3a^5)\big[f(0,0,a)+f(0,0,-a)\big]\\
 &+ (b^7-3a^2b^5+4a^5b^2-2a^7)\big[f(0,0,b)+f(0,0,-b)\big]\Big\}.
\end{aligned}
\]

Taking the limit a → 0 in quadrature formula (13), we obtain the following quadrature formula over the ball S_n of radius b, which has algebraic precision of degree 3:

\[
\int_{S_n} f(x,y,z)\,dx\,dy\,dz \approx \frac{\pi^{n/2}\,b^{n}}{2(n+2)\,\Gamma\!\big(\tfrac{n}{2}+1\big)}\Big[\sum f_{2(n-1)}(VIII) + \sum f_{2}(IX) + 4\,f(0,\cdots,0)\Big].
\]

We now prove that BTQF (13) is a formula with the fewest evaluation points.

Theorem 5 The minimum number of evaluation points of BTQFs over an n-dimensional double layered spherical shell Sh_n is 2(n+1). In particular, the minimum numbers of evaluation points for BTQFs over a ring domain and a 3-dimensional double layered spherical shell are 6 and 8, respectively.

Proof. For a BTQF over Sh_n with precision degree 3, we first prove that the number of evaluation points on the outside layer of Sh_n cannot be less than 2n. Without loss of generality, we assume that the number of evaluation points on the outside layer is 2n − 1. (The cases when the number is less than 2n − 1 can be proved similarly.) We will derive a contradiction from this assumption. If the assumption were valid, we could take the limit a → 0 in the BTQF and obtain a quadrature formula over an n-dimensional sphere with 2n − 1 evaluation points as follows:

\[
\int_{S_n} f(X)\,dX \approx a_0\, f(0,\cdots,0) + \sum_{i=1}^{2n-1} a_i\, f(X_i), \tag{14}
\]

where X_i (i = 1, · · · , 2n − 1) lie on the sphere surface and a_i ≠ 0, i = 1, · · · , 2n − 1. We will prove that it cannot be of algebraic precision degree 3. Let us consider the following 2n complex vectors A_{X_1}, · · · , A_{X_n}, A_{X_1^2}, A_{X_2^2}, · · · , A_{X_n^2}, where

\[
A_{X_i} = \big(\sqrt{a_1}\,x_{1,i},\ \sqrt{a_2}\,x_{2,i},\ \cdots,\ \sqrt{a_{2n-1}}\,x_{2n-1,i}\big)
\]

and

\[
A_{X_i^2} = \big(\sqrt{a_1}\,x_{1,i}^2,\ \sqrt{a_2}\,x_{2,i}^2,\ \cdots,\ \sqrt{a_{2n-1}}\,x_{2n-1,i}^2\big). \tag{15}
\]

Assume that there exist constants b_i (i = 1, · · · , 2n) such that

\[
b_1 A_{X_1} + \cdots + b_n A_{X_n} + b_{n+1} A_{X_1^2} + b_{n+2} A_{X_2^2} + \cdots + b_{2n} A_{X_n^2} = 0. \tag{16}
\]


Taking the dot product with A_{X_i} (i = 1, · · · , n) on both sides of (16) and noting that the quadrature sums in (14) vanish for all f = X^α if α has an odd component and |α| ≤ 3, we obtain

\[
b_i\, A_{X_i}\cdot A_{X_i} = b_i \sum_{k=1}^{2n-1} a_k\, x_{k,i}^2 = 0, \qquad i = 1, \cdots, n.
\]

Since the sums in the above equation are the quadrature sums of BTQF (14) for f(X) = X^α with α = 2e_i ({e_1, e_2, · · · , e_n} being the standard basis of R^n), which are not zero, we obtain b_i = 0 for all i = 1, · · · , n. Consequently, equation (16) reduces to

\[
b_{n+1} A_{X_1^2} + b_{n+2} A_{X_2^2} + \cdots + b_{2n} A_{X_n^2} = 0. \tag{17}
\]

Taking the dot product with A = (√a_1, · · · , √a_{2n−1}) on both sides of equation (17) and noting that the quadrature sums in (14) vanish for all f = X^α if α has an odd component and |α| ≤ 3, we obtain

\[
\Big\| \sum_{i=1}^{n} \sqrt{b_{n+i}}\, A_{X_i} \Big\|_{\ell^2}^2 = 0.
\]

Hence,

\[
\sqrt{b_{n+1}}\, A_{X_1} + \cdots + \sqrt{b_{2n}}\, A_{X_n} = 0.
\]

Similarly, we have b_{n+i} = 0 for all i = 1, · · · , n. Thus, the vectors (15) are linearly independent, but this is impossible because all of them have 2n − 1 components. This contradiction means that the number of evaluation points on the outside layer of any BTQF over Sh_n with algebraic precision degree 3 must be more than 2n − 1.

We now prove that the number of evaluation points on the inside layer of any BTQF over Sh_n with precision degree 3 cannot be less than 2. Otherwise, if there is no evaluation point, or only one evaluation point X_0 = (x_{0,1}, x_{0,2}, · · · , x_{0,n}), on the inside layer of Sh_n, then a BTQF over Sh_n with algebraic precision degree 3 is not exact for the quadratic polynomial f(X) = \(\sum_{i=1}^{n} x_i^2 - b^2\) or for a cubic polynomial

\[
f(X) = \Big(\sum_{i=1}^{n} x_i^2 - b^2\Big)(x_j - x_{0,j}),
\]

where x_{0,j} ≠ 0. This completes the proof of the theorem. □

An argument similar to the proof of Theorem 5 can be applied to solve other minimum evaluation point problems. For instance, we have the following result.

Theorem 6 The minimum number of evaluation points needed for constructing a quadrature formula over an axially symmetric region in R^n with algebraic precision degree 3 is 2n.

The construction of a quadrature formula of this type can be found in Section 3.9 of Stroud [33]. The minimum number of evaluation points needed for constructing a quadrature formula over an axially symmetric region in R^n with a given algebraic precision degree is topologically invariant under a reflection group action.

3 BTQFs with derivatives

To improve the algebraic precision degrees of BTQFs, we use the derivatives of the integrands. As examples, we will construct symmetric quadrature formulas over the surfaces of the regions C_2 = [−1,1]^2, C_3 = [−1,1]^3, and the n-dimensional sphere S_n.

Example 4. Denote the sets of fully symmetric points XI = {(1,1)_{FS}} and XII = {(1,0)_{FS}}. We construct a symmetric BTQF with precision degree 5 over C_2 = [−1,1]^2 as follows:

\[
\begin{aligned}
\int_{C_2} f(x,y)\,dx\,dy \approx{}& a_1 \sum f_4(XI) + a_2 \sum f_4(XII)\\
&+ a_3\big[f'_x(1,1) - f'_x(-1,-1) + f'_x(1,-1) - f'_x(-1,1) + f'_y(1,1) - f'_y(-1,-1) + f'_y(-1,1) - f'_y(1,-1)\big]\\
&+ a_4\big[f'_x(1,0) - f'_x(-1,0) + f'_y(0,1) - f'_y(0,-1)\big].
\end{aligned}
\]

Obviously, the above quadrature formula is of precision degree 5 if it is exact for f(x,y) = 1, x^2, x^4, and x^2 y^2. Therefore, we obtain

\[
a_1 = -\frac{1}{15}, \qquad a_2 = \frac{16}{15}, \qquad a_3 = \frac{2}{45}, \qquad a_4 = -\frac{2}{9}.
\]

We use the following numerical example to show the good accuracy of the above BTQF. Considering the function f(x,y) = e^{−x²−y²} and applying the last quadrature to the integral of f(x,y) over [0,1]^2, we obtain

\[
\begin{aligned}
\int_{[0,1]^2} f(x,y)\,dx\,dy &= \frac{1}{4}\int_{C_2} e^{-((x+1)^2+(y+1)^2)/4}\,dx\,dy\\
&\approx -\frac{1}{60}\big(e^{-2} + 2e^{-1} + 1\big) + \frac{4}{15}\big(2e^{-5/4} + 2e^{-1/4}\big) - \frac{1}{90}\big(2e^{-2} + 2e^{-1}\big) + \frac{1}{9}e^{-5/4} = 0.5576,
\end{aligned}
\]

while the actual integral value is 0.5577.

Similarly, we can construct a BTQF over C_3 = [−1,1]^3 with precision degree 7 and 50 fully symmetric evaluation points XIII = {(1,1,1)_{FS}}, XIV = {(1,0,0)_{FS}}, XV = {(1, 1/2, 0)_{FS}}, and XVI = {(1,1,0)_{FS}} as follows.

\[
\int_{C_3} f(x,y,z)\,dx\,dy\,dz \approx a_1 \sum f_8(XIII) + a_2 \sum f_6(XIV) + a_3 \sum f_{24}(XV) + a_4 \sum f_{12}(XVI) + a_5 M_1 + a_6 M_2 + a_7 M_3,
\]

where a_1 = 1/5, a_2 = −16/105, a_3 = 512/945, a_4 = −64/135, a_5 = −11/405, a_6 = −16/81, a_7 = 172/2835,

M_1 = f'_x(1,1,1) − f'_x(−1,−1,−1) + f'_x(1,1,−1) − f'_x(−1,−1,1) + f'_x(1,−1,−1) − f'_x(−1,1,1) + f'_x(1,−1,1) − f'_x(−1,1,−1)
 + f'_y(−1,1,−1) − f'_y(1,−1,1) + f'_y(−1,1,1) − f'_y(1,−1,−1) + f'_y(1,1,−1) − f'_y(−1,−1,1) + f'_y(1,1,1) − f'_y(−1,−1,−1)
 + f'_z(−1,1,1) − f'_z(1,−1,−1) + f'_z(1,−1,1) − f'_z(−1,1,−1) + f'_z(1,1,1) − f'_z(−1,−1,−1) + f'_z(−1,−1,1) − f'_z(1,1,−1),

M_2 = f'_x(1,0,0) − f'_x(−1,0,0) + f'_y(0,1,0) − f'_y(0,−1,0) + f'_z(0,0,1) − f'_z(0,0,−1),

and

M_3 = f'_x(1,1,0) − f'_x(−1,−1,0) + f'_x(1,−1,0) − f'_x(−1,1,0) + f'_x(1,0,1) − f'_x(−1,0,−1) + f'_x(1,0,−1) − f'_x(−1,0,1)
 + f'_y(0,1,1) − f'_y(0,−1,−1) + f'_y(0,1,−1) − f'_y(0,−1,1) + f'_y(1,1,0) − f'_y(−1,−1,0) + f'_y(−1,1,0) − f'_y(1,−1,0)
 + f'_z(1,0,1) − f'_z(−1,0,−1) + f'_z(−1,0,1) − f'_z(1,0,−1) + f'_z(0,1,1) − f'_z(0,−1,−1) + f'_z(0,−1,1) − f'_z(0,1,−1).
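The two-dimensional rule of Example 4 is easy to test directly. The sketch below (not part of the original paper) reproduces the value 0.5576 quoted above; it assumes the substitution x = (u+1)/2, y = (v+1)/2, which maps [0,1]^2 onto C_2 and produces the factor 1/4 and the exponent /4 seen in the displayed identity.

```python
from math import erf, exp, pi, sqrt

# Degree-5 rule on C2 with derivative terms, coefficients a1..a4 as above,
# applied to g(u,v) = (1/4) exp(-((u+1)^2+(v+1)^2)/4).
a1, a2, a3, a4 = -1.0 / 15, 16.0 / 15, 2.0 / 45, -2.0 / 9

def g(u, v):
    return 0.25 * exp(-((u + 1) ** 2 + (v + 1) ** 2) / 4.0)

def gu(u, v):       # partial derivative of g with respect to u
    return -(u + 1) / 2.0 * g(u, v)

def gv(u, v):       # partial derivative of g with respect to v
    return -(v + 1) / 2.0 * g(u, v)

XI  = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
XII = [(1, 0), (-1, 0), (0, 1), (0, -1)]

quad = (a1 * sum(g(*p) for p in XI)
        + a2 * sum(g(*p) for p in XII)
        + a3 * (gu(1, 1) - gu(-1, -1) + gu(1, -1) - gu(-1, 1)
                + gv(1, 1) - gv(-1, -1) + gv(-1, 1) - gv(1, -1))
        + a4 * (gu(1, 0) - gu(-1, 0) + gv(0, 1) - gv(0, -1)))

exact = (sqrt(pi) / 2 * erf(1.0)) ** 2   # integral of exp(-x^2-y^2) over [0,1]^2
print(round(quad, 4), round(exact, 4))   # prints 0.5576 0.5577
```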

Example 5. Choose 2n fully symmetric evaluation points XVII = {(r, 0, · · · , 0)_{FS}}. We can obtain a BTQF over S_n (x_1^2 + · · · + x_n^2 ≤ r^2) with precision degree 3 as follows:

\[
\int_{S_n} f(X)\,dX \approx \frac{\pi^{n/2}\, r^{n+1}}{2n(n+2)\,\Gamma\!\big(\tfrac{n}{2}+1\big)}\Big[\frac{n+2}{r}\sum f_{2n}(XVII) - f'_{x_1}(r,0,\cdots,0) + f'_{x_1}(-r,0,\cdots,0) - \cdots - f'_{x_n}(0,\cdots,0,r) + f'_{x_n}(0,\cdots,0,-r)\Big].
\]

At the end of this section, we discuss the construction of numerical quadrature formulas over \(\bar S_n = \{X \in R^n : |X| = 1\}\) using some recent results in [35], where \(\bar S_n\) is the surface of the unit ball \(B_n = B_n(1) = \{X \in R^n : |X| \le 1\}\) in R^n. Let H be a function defined on R^n that is symmetric with respect to x_n; i.e., H(X, x_n) = H(X, −x_n), X ∈ R^{n−1}. Then for any continuous function f defined on \(\bar S_n\),

\[
\int_{\bar S_n} f(Y)H(Y)\,d\mu_n = \int_{B_{n-1}} \Big[f\big(X, \sqrt{1-|X|^2}\big) + f\big(X, -\sqrt{1-|X|^2}\big)\Big]\, H\big(X, \sqrt{1-|X|^2}\big)\,\frac{dX}{\sqrt{1-|X|^2}}, \tag{18}
\]

where Y ∈ \(\bar S_n\), X ∈ R^{n−1}, −1 ≤ t ≤ 1, and dµ_n is the surface measure on \(\bar S_n\). The volume of \(\bar S_n\) is \(\omega_n = \int_{\bar S_n} d\mu_n = 2\pi^{n/2}/\Gamma(\tfrac{n}{2})\). Formula (18), shown in Xu [35], can be proved straightforwardly by substituting \(d\mu_n = (1-t^2)^{(n-3)/2}\,dt\,d\mu_{n-1}\) and \(Y = (\sqrt{1-t^2}\,X, t)\) into the left-hand integral of the equation. Formula (18) changes a boundary integral into an integral over the interior of the boundary; hence it can be used to derive a BTQF over B_n from a quadrature formula for an integral over B_{n−1}. Following [35], suppose that there is a quadrature formula of precision degree m on B_{n−1},

\[
\int_{B_{n-1}} g(X)\, H\big(X, \sqrt{1-|X|^2}\big)\,\frac{dX}{\sqrt{1-|X|^2}} \approx \sum_{i=1}^{N} a_i\, g(X_i);
\]

that is, the quadrature formula is exact for all polynomials in \(\pi^{n-1}_m\), which denotes the set of all polynomials defined on R^{n−1} with total degree not more than m.


Then there is a quadrature formula of homogeneous precision degree m on \(\bar S_n\):

\[
\int_{\bar S_n} f(Y)H(Y)\,d\mu_n \approx \sum_{i=1}^{N} a_i\Big[f\big(X_i, \sqrt{1-|X_i|^2}\big) + f\big(X_i, -\sqrt{1-|X_i|^2}\big)\Big]. \tag{19}
\]
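The lifting identity (18) behind this construction is easy to verify numerically. The sketch below (not part of the original paper) takes n = 3, H ≡ 1, and the test function f(Y) = y_3^2, for which the exact surface integral over the unit sphere is 4π/3; the right-hand side of (18) is then evaluated in polar coordinates with a simple midpoint rule.

```python
import math

# Check identity (18) for n = 3, H = 1, f(Y) = y3^2:
#   RHS = 2*pi * \int_0^1 [2*(1-r^2)/sqrt(1-r^2)] * r dr
#       = 2*pi * \int_0^1 2*sqrt(1-r^2) * r dr,
# which should equal the exact surface integral 4*pi/3.
N = 200000
h = 1.0 / N
rhs = 0.0
for i in range(N):
    r = (i + 0.5) * h                       # midpoint rule in the radial variable
    rhs += 2.0 * math.sqrt(1.0 - r * r) * r * h
rhs *= 2.0 * math.pi
print(rhs, 4.0 * math.pi / 3.0)             # both approximately 4.18879
```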

Recently, Mhaskar, Narcowich, and Ward (see [28]) developed a new method for obtaining quadrature formulas on \(\bar S_n\), which can be applied to the right-hand integrals of equation (20) in the following theorem, so that BTQFs over B_n can be constructed.

Theorem 7 Suppose that F(X) is a continuous function defined on the ball B_n (x_1^2 + · · · + x_n^2 ≤ 1) that has continuous partial derivatives of order 2m with respect to x_n. Then there exists the following expansion, which has m terms and possesses degree 2m − 1 of algebraic precision:

\[
\int_{B_n} F(X)\,dV = \sum_{k=0}^{m-1} \frac{(-1)^k}{m!} \int_{S_{n-1}} L_k\big(F(X), U_m(X)\big)\,dS + \rho_m, \tag{20}
\]

where L_k(·,·) is defined by

\[
L_k(F, G) \equiv \left(\frac{\partial^k F}{\partial x_n^k}\right)\left(\frac{\partial^{\,m-k-1} G}{\partial x_n^{\,m-k-1}}\right)\left(\frac{\partial x_n}{\partial \nu}\right)
\]

and ρ_m has the estimate

\[
|\rho_m| \le \frac{\pi^{n/2}\, m!}{\Gamma\!\big(m+\tfrac{n}{2}+1\big)\,(2m)!}\,\left\|\frac{\partial^{2m} F}{\partial x_n^{2m}}\right\|_{C} \tag{21}
\]

or

\[
|\rho_m| \le \left(\frac{\pi^{n/2}\, m!}{\Gamma\!\big(m+\tfrac{n}{2}+1\big)\,(2m)!}\right)^{1/2}\left\|\frac{\partial^{2m} F}{\partial x_n^{2m}}\right\|_{L^2}. \tag{22}
\]

Formula (20) can be proved by using Green's formula successively; the proof is omitted here.

References [1] B.L. Burrows, A new approach to numerical integration, J. Inst. Math. Applics, 26(1980), 151-173. [2] B. Cipra, What’s Happening in the Mathematical Sciences, American Mathematical Society, Providence, RI, 1996. [3] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992.


[4] P.J. Davis, Double integrals expressed as single integrals or interpolatory functions, J. Appro. Theory, 5(1972), 276-307. [5] P.J. Davis and P. Rabinowitz, Methods of Numerical Integration, Academic Press, New York, 1975. [6] J.D. Federenko, A formula for the approximate evaluation of double integrals, Dopovidi. Akad. Nauk Ukrain. RSR, (1964), 1000-1005. [7] A. Ghizzetti and A. Ossicini, Quadrature Formula, Academic Press, New York, 1970. ¨ [8] W. Gr¨ obner, Uber die Konstruktion von Systemen orthogonaler Polynome in ein-und-zwei dimensionalen Bereiche, Monatsh. Math., 52(1948), 48-54. [9] T.X. He, Boundary-type quadrature formulas without derivative terms, J. Math. Res. Expo., 2(1981), 93-102. [10] T.X. He, On the algebraic method for constructing the boundary-type quadrature formulas, Comp. Math. (China), (1985), No.1, 1-5. [11] T.X. He, Spline interpolation and its wavelet analysis, Proceedings of the Eighth International Conference on Approximation Theory, C.K. Chui and L.L. Schumaker (eds.), World Scientific Publishing Co., Inc., 1995, 143-150. [12] T.X. He, Construction of boundary quadrature formulas using wavelets, Wavelet Applications in Signal and Image Processing III, SPIE-The International Society for Optical Engineering, A.F. Laine and M.A. Unser (eds.) 1995, 825-836. [13] T.X. He, Short time Fourier transform, integral wavelet transform, and wavelet functions associated with splines, J. Math. Anal. & Appl., 224(1998), 182-200. [14] T.X. He, Boundary quadrature formulas and their applications, Handbook of Analytic-Computational Methods in Applied Mathematics, G. Anastassiou (ed.), Chapman & Hall/CRC, New York, 2000, 773-800. [15] T.X. He, Dimensionality Reducing Expansion of Multivariate Integration, Birkh¨ auser, Boston, March, 2001. [16] E. Hern´ andez and G. Weiss, A First Course on Wavelets, CRC Press, New York, 1996. [17] L.C. Hsu, On a method for expanding multiple integrals in terms of integrals in lower dimensions, Acta. Math. Acad. Sci. Hung., 14(1963). 359-367. [18] L.C. Hsu and T.X. He, On the minimum estimation of the remainders in dimensionality lowering expansions with algebraic precision, J. Math. (Wuhan), 2,3(1982), 247-255. [19] L.C. Hsu and Y.S. Zhou, Numerical integration in high dimensions, Computational Methods Series. Science Press, Beijing, 1980.


[20] L.C. Hsu and Y.S. Zhou, Two classes of boundary type cubature formulas with algebraic precision, Calcolo, 23(1986), 227-248. [21] D.V. Ionescu, Generalization of a quadrature formula of N. Obreschkoff for double integrals (Romanian), Stud. Cerc. Mat., 17(1965), 831-841. [22] E. G. Kalnins, W. Miller Jr., and M. V. Tratnik, Families of orthogonal and biorthogonal polynomials on the N -sphere. SIAM J. Math. Anal. 22 (1991), no. 1, 272–294. [23] P. Keast and J.C. Diaz, Fully symmetric integration formula for the surface of the sphere in S dimension, SIAM J. Numer. Anal. 20(1983), 406-419. [24] L.J. Kratz, Replacing a double integral with a single integral, J. Appro. Theory, 27(1979), 379-390. [25] V. I. Lebedev and A. L. Skorokhodov, Quadrature formulas for a sphere of orders 41, 47 and 53. (Russian) Dokl. Akad. Nauk 324 (1992), no. 3, 519–524; translation in Russian Acad. Sci. Dokl. Math. 45 (1992), no. 3, 587–592 ¨ Toime[26] M. Levin, On a method of evaluating double integrals, Tartu Riikl. Ul. tised, 102(1961), 338-341. [27] M. Levin, Extremal problems connected with a quadrature formula, Eesti NSV Tead. Akad. Toimetised F¨ uu ¨s-Mat. Tehn. Seer., 12(1963), 44-56. [28] H.N. Mhaskar, F.J. Narcowich, and J.D. Ward, Quadrature formulas on spheres using scattered data, preprint, 2000. [29] N. Obreschkoff, Neue Quadraturformeln, Abhandl. d. preuss. Akad. d. Wiss., Math. Natur. wiss. K1., 4(1940), 1-20. [30] M. Sadowsky, A formula for the approximate computation of a triple integral, Amer. Math. Monthly, 47(1940), 539-543. [31] D.D. Stancu, Sur quelques formules generales de quadrature du type GaussChristoffel, Mathematica (Cluj), 1(1959), 167-182. [32] D. D. Stancu and A.H. Stroud, Quadrature formulas with simple Gaussian nodes and multiple fixed nodes, Math. Comp., 17(1963), 384-394. [33] A.H. Stroud, Approximate Calculation of Multiple Integrals, Prentice-Hall, Englewood Cliffs, N.H., 1971. [34] G.G. Walter, Wavelets and Other Orthogonal Systems with Applications, CRC Press, Ann Arbor, 1994. [35] Y. Xu, Orthogonal polynomials and cubature formulae on spheres and on balls, SIAM J. Math. Anal., 29(1998), 779-793. [36] Y.S. Zhou and T.X. He, Higher dimensional Korkin Theorem,

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 504-527, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Theoretical Analysis and Numerical Realization of Bioluminescence Tomography

Rongfang Gong¹, Xiaoliang Cheng² and Weimin Han³

Abstract. Mathematically, bioluminescence tomography (BLT) is an ill-posed inverse source problem. In this paper, a new formulation for the BLT problem is developed. It is used to explain rigorously the reason behind the loss of the continuous dependence of the light source function solution on the measurements. On the basis of the formulation, an approximation using finite element method is provided. Some error estimates for the reconstructed light source function are obtained. By using adjoint equations, a simple but efficient iterative scheme is explored. Several numerical examples are presented to test the performance of the formulation and the iterative scheme. Keywords. Bioluminescence tomography, ill-posed problem, finite element method

1 Introduction

The function of molecular imaging is to help study biological processes in vivo at the cellular and molecular levels, see e.g. [8, 25, 29, 31, 32]. It may non-invasively differentiate normal from diseased conditions. While some classic techniques do reveal information on micro-structures of the tissues, only recently have molecular probes been developed along with associated imaging technologies that are sensitive and specific for detecting molecular targets in animals and humans. A molecular probe has a high affinity for attaching itself to a target molecule and a tagging ability with a marker molecule that can be tracked outside a living body. Molecular imaging contains a lot of modalities, see [7] for detail. Among them, optical imaging, especially fluorescence and bioluminescence imaging, has attracted remarkable attention for its unique advantages regarding performance and cost-effectiveness. Bioluminescence tomography (BLT) [12, 33, 34] is an emerging and promising bioluminescence imaging modality. The major issue of BLT is the determination of the distribution of in vivo bioluminescent source. In BLT, we reconstruct an internal bioluminescent source from the measured bioluminescent signal on the external surface of a small animal. The problem of determining the photon density on the small animal surface from the bioluminescent 1

Department of Mathematics, Zhejiang University, Hangzhou 310027, P.R. China. [email protected] 2 Department of Mathematics, Zhejiang University, Hangzhou 310027, P.R. China. [email protected] 3 Department of Mathematics, University of Iowa, Iowa City, IA 52242, U.S.A. [email protected]


source distribution within the animal requires accurate representation of photon transport in biological tissue. Photon propagation in biological tissue is governed by the radiative transfer equation (RTE) [24]. However, the RTE is highly dimensional and presents a serious challenge for its accurate numerical simulations given the current level of development in computer software and hardware. Because the mean-free path of the photon is between 500 nm and 1000 nm in biological tissues, which is very small compared to the size of a typical object in this context, the predominant phenomenon is scattering. Usually a diffusion approximation of the RTE is employed [1, 30]. Based on the diffusion approximation equation, many theoretical analysis and numerical methods are explored, see e.g. [6, 7, 10, 18, 27]. To improve the accuracy of the reconstructed light source function, multispectral systems are developed [4, 11, 19, 21, 28]. We refer to [20, 35] for the BLT problem related to the optical properties issue. A recent survey of biomedical background, mathematical theory and numerical approximation for BLT is [22]. In this paper, we study the BLT problem through a new perspective that leads to better convergence behavior for its numerical solution. In Section 2, a simplified version of the BLT problem is reduced to an operator equation. The operator is compact, which explains why the source function in the BLT problem does not depend continuously on measurements. In Section 3, based on the discussion in Section 2, a new Tikihonov-type regularized formulation is provided to compute the light source function. In the formulation, the measurements on the boundary are transferred to a knowledge in the problem domain, which makes the BLT problem more regular. The well-posedness of the new formulation, including the solution existence, uniqueness and continuous dependence, is shown. The limiting behavior is discussed for the regularized solution when the regularization approaches zero. In Section 4, a finite element approximation for the BLT problem based on the new formulation is studied. Specifically, we use piecewise constants to approximate the source function and standard linear elements to approximate the state function. Error estimates are derived with improved convergence orders compared to those found in [16, 18]. In Subsection 5.1, an iterative scheme for the BLT reconstruction based on the given formulation is shown. By introducing adjoint equations, we avoid computing the inversion of an elliptic partial differential operator. Some detail on the implementation is also given. Two numerical examples are presented in Subsection 5.2 to show the numerical performance of the method discussed in this paper.


2 Ill-posedness of the BLT problem

We first introduce the classical formulation of the diffusion-based BLT problem. Let the biological medium occupy a non-empty, open and bounded set Ω ⊂ R^d, d ≤ 3. The boundary Γ of the domain Ω is assumed Lipschitz continuous. Let D = (3(µ_a + µ′_s))^{−1}, where µ_a is the absorption coefficient and µ′_s the reduced scattering coefficient, and denote by ∂_ν the outward normal differentiation operator. Then the classical formulation of the BLT problem based on the diffusion approximation is the following ([7, 18]).

Problem 2.1 Given D > 0, µ_a ≥ 0, g_1 and g_2, suitably smooth, find a bioluminescent source p such that the solution u of the boundary-value problem

\[
-\operatorname{div}(D\nabla u) + \mu_a u = p\,\chi_{\Omega_0} \quad \text{in } \Omega, \tag{2.1}
\]
\[
D\,\partial_\nu u = g_2 \quad \text{on } \Gamma \tag{2.2}
\]

satisfies

\[
u = g_1 \quad \text{on } \Gamma. \tag{2.3}
\]

Here Ω_0, known as the permissible region, is a measurable subset of Ω, and χ_{Ω_0} is the characteristic function of Ω_0: χ_{Ω_0}(x) equals 1 for x ∈ Ω_0, and equals 0 for x ∈ Ω\Ω_0. It is shown in [18] that the pointwise BLT Problem 2.1 may have infinitely many solutions or may have no solution, depending on the choice of the function set where we look for the source function. It is also mentioned there that even when the solution existence and uniqueness issues could be settled, the source function solution does not depend continuously on the measurement. This section is devoted to a rigorous proof of this instability statement for the BLT problem in the simplified case where the permissible region is the entire domain and the admissible set for the source function is the entire L²(Ω) space.

We use standard notation for Sobolev spaces ([2]). Denote V = H¹(Ω), V_0 = H¹_0(Ω), Q = L²(Ω_0). Our basic assumptions on the data throughout the paper are: D ∈ L^∞(Ω) and D ≥ D_0 for some constant D_0 > 0, µ_a ∈ L²(Ω) and µ_a ≥ µ_0 for some constant µ_0 > 0, g_1 ∈ H^{1/2}(Γ), and g_2 ∈ L²(Γ). We further let V_{g_1} = {v ∈ V | v = g_1 on Γ}. Define

\[
a(u,v) = \int_\Omega \big(D\nabla u\cdot\nabla v + \mu_a u\,v\big)\,dx \quad \forall\, u, v \in V. \tag{2.4}
\]

Then a(·,·) is symmetric, continuous and coercive on V. Therefore, by the Lax–Milgram Lemma ([2, 13]), for any q ∈ Q, the problems

\[
u_D(q,g_1) \in V_{g_1}, \qquad a(u_D(q,g_1), v) = (q,v)_Q \quad \forall\, v \in V_0 \tag{2.5}
\]

and

\[
u_N(q,g_2) \in V, \qquad a(u_N(q,g_2), v) = (q,v)_Q + (g_2,v)_{L^2(\Gamma)} \quad \forall\, v \in V \tag{2.6}
\]

each has a unique solution. Moreover,

\[
\|u_D(q,g_1)\|_V \le c\,\big(\|q\|_Q + \|g_1\|_{H^{1/2}(\Gamma)}\big), \tag{2.7}
\]
\[
\|u_N(q,g_2)\|_V \le c\,\big(\|q\|_Q + \|g_2\|_{L^2(\Gamma)}\big). \tag{2.8}
\]

We write u_D(q) = u_D(q,0), ũ_D(g_1) = u_D(0,g_1), u_N(q) = u_N(q,0), and ũ_N(g_2) = u_N(0,g_2). Then u_D(q,g_1) = u_D(q) + ũ_D(g_1) and u_N(q,g_2) = u_N(q) + ũ_N(g_2). Next we introduce a weak formulation for Problem 2.1 based on both the Dirichlet and Neumann boundary problems (2.5) and (2.6). Define

\[
\begin{aligned}
s(p,q) &= \big(u_D(p)-u_N(p),\; u_D(q)-u_N(q)\big)_{L^2(\Omega)} && \forall\, p, q \in Q,\\
l(q) &= \big(\tilde u_N(g_2)-\tilde u_D(g_1),\; u_D(q)-u_N(q)\big)_{L^2(\Omega)} && \forall\, q \in Q.
\end{aligned} \tag{2.9}
\]

Apparently, the bilinear form s is symmetric and positive semi-definite over Q. Moreover, by the Schwarz inequality and the trace theorem [3, Theorem 1.6.6], together with the bounds (2.7) and (2.8), we conclude that both s and l are continuous on Q. Denote by S : Q → Q the operator defined through the relation (S p, q)_Q = s(p, q) for any p, q ∈ Q. Then S is symmetric, positive semi-definite and continuous. Moreover, by the Riesz representation theorem, there is an element, denoted again by l ∈ Q, such that (l, q)_Q = l(q) for any q ∈ Q. We let Ω_0 = Ω in the rest of this section, and introduce a new problem.

Problem 2.2 Find p ∈ Q such that

\[
S p = l \quad \text{in } Q. \tag{2.10}
\]
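To fix ideas, the following one-dimensional finite-difference analogue (an illustration only; the paper works with finite elements in two and three dimensions, and all coefficient values here are made up) mimics the Dirichlet and Neumann solves behind (2.5)–(2.6) and forms the residual field u_D(q, g_1) − u_N(q, g_2) that drives Problem 2.2: it vanishes, up to roundoff, when q is the source that actually produced the boundary data.

```python
import numpy as np

# 1D analogue: -(D u')' + mu*u = q on (0,1), with either u = g1 (Dirichlet)
# or a flux condition D du/dn = g2 (Neumann) at the two endpoints.
n, D, mu = 200, 0.03, 0.4
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

def interior_rows(A):
    for i in range(1, n):
        A[i, i - 1] = A[i, i + 1] = -D / h**2
        A[i, i] = 2.0 * D / h**2 + mu

def solve_dirichlet(q, g1):
    A, b = np.zeros((n + 1, n + 1)), q.copy()
    interior_rows(A)
    A[0, 0] = A[n, n] = 1.0
    b[0], b[n] = g1
    return np.linalg.solve(A, b)

def solve_neumann(q, g2):
    A, b = np.zeros((n + 1, n + 1)), q.copy()
    interior_rows(A)
    # ghost-point treatment of -D u'(0) = g2[0] and D u'(1) = g2[1]
    A[0, 0] = A[n, n] = 2.0 * D / h**2 + mu
    A[0, 1] = A[n, n - 1] = -2.0 * D / h**2
    b[0] += 2.0 * g2[0] / h
    b[n] += 2.0 * g2[1] / h
    return np.linalg.solve(A, b)

# synthetic data: one Neumann forward solve, boundary values read off the state
p_true = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
u_true = solve_neumann(p_true, (0.0, 0.0))
g1, g2 = (u_true[0], u_true[-1]), (0.0, 0.0)

residual_true  = solve_dirichlet(p_true, g1) - solve_neumann(p_true, g2)
residual_wrong = solve_dirichlet(np.zeros(n + 1), g1) - solve_neumann(np.zeros(n + 1), g2)
print(np.max(np.abs(residual_true)), np.max(np.abs(residual_wrong)))
```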

We have the following result.

Proposition 2.3 If p ∈ Q solves Problem 2.1, then it is a solution of Problem 2.2.

Proof. Let p_* be a solution of Problem 2.1. Then u_D(p_*, g_1) = u_N(p_*, g_2). So

\[
u_D(p_*) - u_N(p_*) = \tilde u_N(g_2) - \tilde u_D(g_1).
\]

Thus,

\[
\big(u_D(p_*)-u_N(p_*),\; u_D(q)-u_N(q)\big)_{L^2(\Omega)} = \big(\tilde u_N(g_2)-\tilde u_D(g_1),\; u_D(q)-u_N(q)\big)_{L^2(\Omega)} \quad \forall\, q \in Q,
\]


i.e., s(p∗ , q) = l(q) ∀ q ∈ Q. Hence, p∗ is a solution of Problem 2.2. It is possible to demonstrate the equivalence between Problem 2.1 and Problem 2.2. For this purpose, we introduce the following stronger smoothness assumptions on the data: Γ ∈ C 1,1 ,

D ∈ C 0,1 (Ω),

g1 ∈ H 3/2 (Γ),

g2 ∈ H 1/2 (Γ).

(2.11)

We recall the following result ([18]). Lemma 2.4 Let Ω ⊂ Rd be an open bounded subset with a C 1,1 boundary and u ∈ H 2 (Ω). Then there are infinitely many functions v ∈ H 2 (Ω) such that γv = γu,

γ∂ν v = γ∂ν u.

Here γ stands for the trace operator. We have the following converse of Proposition 2.3. Proposition 2.5 Assume (2.11). Then a solution of Problem 2.2 solves Problem 2.1. Proof. Let p∗ ∈ Q be a solution of Problem 2.2: (Sp∗ , q)Q = (l, q)Q

∀q∈Q

or equivalently (uD (p∗ , g1 ) − uN (p∗ , g2 ), uD (q) − uN (q))L2 (Ω) = 0 ∀ q ∈ Q.

(2.12)

Define an operator E from H 2 (Ω) to L2 (Ω): E u = −div(D∇u) + µa u for u ∈ H 2 (Ω), and set u∗ = uD (p∗ , g1 ) − uN (p∗ , g2 ). Then u∗ ∈ H 2 (Ω) from (4.10) and (4.11) below, and E u∗ = 0. Denote g1,∗ = γu∗ and g2,∗ = γD∂ν u∗ . Then, g1,∗ ∈ H 3/2 (Γ) and g2,∗ ∈ H 1/2 (Γ). By Lemma 2.4, there exists a function uD,∗ 6= u∗ in H 2 (Ω) such that uD,∗ = 0,

D∂ν uD,∗ = g2,∗

on Γ.

Let q∗ = E uD,∗ ∈ Q and uN,∗ = uD,∗ − u∗ ∈ H 2 (Ω). Then uD,∗ = uD (q∗ ) and u∗ = uD,∗ − uN,∗ . From E uN,∗ = E uD,∗ − E u∗ = q∗ − 0 = q∗ in Ω and D∂ν uN,∗ = D∂ν uD,∗ − D∂ν u∗ = g2,∗ − D∂ν u∗ = 0 on Γ, 5

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

509

we know that uN,∗ = uN (q∗ ), i.e., there exists a q∗ ∈ Q such that uD (q∗ ) − uN (q∗ ) = u∗ . Substitute q∗ for q in (2.12) to give uD (p∗ , g1 ) = uN (p∗ , g2 ), which shows that p∗ is a solution of Problem 2.1. We now present a result on the compactness of the operator S. Theorem 2.6 The operator S : Q → Q is compact. Proof. Let {pn }n ⊂ Q be bounded. Then there is a subsequence, denoted again by {pn }n , which converges weakly in Q to some element p∗ ∈ Q because of the reflexivity of space Q. Let unD = uD (pn ), unN = uN (pn ), i.e., unD ∈ V0 , unN ∈ V , and a(unD , v) = (pn , v)Q a(unN , v)

n

= (p , v)Q

∀ v ∈ V0 ,

(2.13)

∀ v ∈ V.

(2.14)

Then {unD }n and {unN }n are bounded in V from the properties (2.7) and (2.8). Hence, we can extract two further subsequences, denoted again by {unD }n and {unN }n , which converge weakly in V and strongly in Q to u∗D ∈ V0 and u∗N ∈ V , respectively. Let n → ∞ in (2.13) and (2.14) to get u∗D = uD (p∗ ) and u∗N = uN (p∗ ). Strong convergence of {unD }n to u∗D in V follows from Z n ∗ 2 n ∗ n ∗ α kuD − uD kV ≤ a(uD − uD , uD − uD ) = (pn − p∗ ) (unD − u∗D ) dx → 0 Ω

as n → ∞. Similarly,

unN



u∗N

as n → ∞.

Denote sn = Spn . Then {sn }n is bounded in Q. Repeating the above argument, we conclude that there exists an element s∗ ∈ Q such that sn ⇀ s∗ in Q,

uD (sn ) → uD (s∗ ), uN (sn ) → uN (s∗ ) in V as n → ∞.

Since (sn , q)Q = (Spn , q)Q = s(pn , q) = (unD − unN , uD (q) − uN (q))L2 (Ω)

∀ q ∈ Q,

we have s∗ = Sp∗ by letting n → ∞. Consequently, strong convergence of sn to s∗ in Q follows from ksn − s∗ k2Q = (Spn − Sp∗ , sn − s∗ )Q = s(pn − p∗ , sn − s∗ )

= (unD − u∗D − (unN − u∗N ), uD (sn ) − uD (s∗ ) − (uN (sn ) − uN (s∗ )))L2 (Ω)

≤ (kunD − u∗D kV + kunN − u∗N kV ) (kuD (sn ) − uD (s∗ )kV + kuN (sn ) − uN (s∗ )kV ) →0

as n → ∞, and the proof is complete. Compactness of S explains the instability of solutions of the BLT problem with respect to the measurement data. 6
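The practical consequence of this compactness is easy to see on a generic example. The sketch below (an illustration only; the operator is a stand-in smoothing matrix, not the BLT operator S itself) shows rapidly decaying singular values and the resulting amplification of a tiny data perturbation in a naive least-squares reconstruction.

```python
import numpy as np

# A compact-like smoothing operator: its small singular values make the naive
# inverse problem unstable, which is what Theorem 2.6 implies for Problem 2.2.
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2)) / n   # Gaussian blur

p_true = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
b = A @ p_true

print("smallest singular values:", np.linalg.svd(A, compute_uv=False)[-3:])

b_noisy = b + 1e-6 * rng.standard_normal(n)            # tiny data perturbation
p1 = np.linalg.lstsq(A, b, rcond=None)[0]
p2 = np.linalg.lstsq(A, b_noisy, rcond=None)[0]
print("change in data:    ", np.linalg.norm(b_noisy - b))
print("change in solution:", np.linalg.norm(p2 - p1))  # many orders of magnitude larger
```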

510

3

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

3 A new regularized reformulation for the BLT problem

In this section, a Tikhonov-type regularization method is used to reconstruct numerically the source function of the BLT problem. We seek the source function in an admissible set Qad ⊂ Q, which is assumed to be a bounded, closed and convex set. Usually, we take Qad = {q ∈ Q | q ≥ 0 a.e. in Ω0}. For any ε ≥ 0, define a bilinear form sε(·,·) over Q × Q: sε(p, q) = s(p, q) + ε (p, q)Q

∀ p, q ∈ Q.

(3.1)

Observe that sε is symmetric, continuous and coercive on Q × Q for each ε > 0. Note that Problem 2.2 is equivalent to minimizing the functional 21 s(q, q) − l(q) over the space Q. Thus, we introduce the following regularized reformulation of the BLT problem. Problem 3.1 Find pε ∈ Qad such that Jε (p) = min Jε (q), q∈Qad

where

1 Jε (q) = sε (q, q) − l(q). 2

Easily, 1 ε 1 Jε (q) = kuD (q, g1) − uN (q, g2)k2L2 (Ω) − ke uD (g1 ) − u eN (g2 )k2L2 (Ω) + kqk2Q . 2 2 2

We note that Problem 3.1 is equivalent to finding pε ∈ Qad such that sε (pε , q − pε ) ≥ l(q − pε ) ∀ q ∈ Qad .

(3.2)

If Qad is a subspace of Q, (3.2) reduces to sε (pε , q) = l(q) ∀ q ∈ Qad . Similar to S, we denote by Sε : Q → Q the operator such that (Sε p, q)Q = sε (p, q) for any p, q ∈ Q. Then Sε = S + ε I is symmetric, continuous and coercive for each ε > 0, where I stands for the identity operator over Q. Next we discuss the well-posedness of Problem 3.1. Standard arguments ([2, 26]) lead to the next result. 7

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

511

Proposition 3.2 Problem 3.1 admits a unique solution pε in Q. Regarding solution stability of Problem 3.1, we have the following result. Theorem 3.3 The solution pε of Problem 3.1 depends continuously on ε > 0, D ∈ L∞ (Ω), µa ∈ L∞ (Ω), g1 ∈ H 1/2 (Γ) and g2 ∈ L2 (Γ). To prove Theorem 3.3, we need some preparations. Let a(·, ·) be given in (2.4). Assume D δ ∈ L∞ (Ω) and µδa ∈ L∞ (Ω) such that kD δ kL∞ (Ω) ≤ δ D0 and kµδa kL∞ (Ω) ≤ δ µ0 for a small positive constant δ to be specified below. For any q ∈ Q, let uδD (q, g1 ) ∈ Vg1 and uδN (q, g2 ) ∈ V be the unique solutions of the problems aδ (u, v) = (q, v)Q

∀ v ∈ V0 ,

aδ (u, v) = (q, v)Q + (g2 , v)L2 (Γ)

(3.3) ∀ v ∈ V,

(3.4)

respectively, where δ

a (u, v) =

Z



((D + D δ ) ∇u∇v + (µa + µδa ) u v) dx

(3.5)

and δ < 1. Again, we rewrite uδD (q), u eδD (g1 ), uδN (q) and u eδN (g2 ) for uδD (q, 0), uδD (0, g1), uδN (q, 0) and uδN (0, g2 ) respectively. Then we have the following estimates. Lemma 3.4 For a properly small δ > 0 and any q ∈ Q, there exists a constant c such that kuδD (q) − uD (q)kV ≤ c δkqkQ,

kuδN (q) − uN (q)kV ≤ c δkqkQ ,

ke uδD (g1 ) − u eD (g1 )kV ≤ c δkg1 kH 1/2 (Γ) , ke uδN (g2 ) − u eN (g2 )kV ≤ c δkg2kL2 (Γ) .

Proof. We prove the first estimate, the others can be verified similarly. We recall that a(uD (q), v) = (q, v)Q

∀ v ∈ V0 .

(3.6)

Subtract (3.6) from (3.3) to get a(uδD (q)

− uD (q), v) = −

Z



(D δ ∇uδD (q)∇v + µδa uδD (q) v) dx ∀ v ∈ V0 .

Because uδD (q) − uD (q) ∈ V0 for any q ∈ Q, we can take v for uδD (q) − uD (q) in above equation. Then by using of the coercivity of bilinear form a(·, ·) and Schwarz inequality, we obtain kuδD (q) − uD (q)kV ≤ c δkuδD (q)kV . 8

(3.7)

512

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

For δ < 1, from the regularity property (2.7), we have kuδD (q)kV ≤ c kqkQ which reduces (3.7) to kuδD (q) − uD (q)kV ≤ c δkqkQ . Therefore, we complete the proof. Let εδ ∈ R, g1δ ∈ H 1/2 (Γ) and g2δ ∈ L2 (Γ) be such that |εδ | ≤ ε δ, kg1δ kH 1/2 (Γ) ≤ δkg1 kH 1/2 (Γ) and kg2δ kL2 (Γ) ≤ δkg2 kL2 (Γ) , and define bilinear form sδε (·, ·) and linear functional lδ (·) by sδε (p, q) = (uδD (p) − uδN (p), uδD (q) − uδN (q))L2 (Ω) + (ε + εδ ) (p, q)Q lδ (q) = (e uδN (g2 + g2δ ) − u eδD (g1 + g1δ ), uδD (q) − uδN (q))L2 (Ω)

∀ q ∈ Q.

∀ p, q ∈ Q,

Note that u eδN (g2 + g2δ ) = u eδN (g2 ) + u eδN (g2δ ) and u eδD (g1 + g1δ ) = u eδD (g1 ) + u eδD (g1δ ). Denote by pδε the solution of Problem 3.1 with sε and l replaced by sδε and lδ respectively. For 0 ≤ δ < 1, sδε is symmetric, bounded and coercive on Q, and lδ is continuous on Q. Hence, for 0 ≤ δ < 1, pδε uniquely exists. Set ∆sδ (p, q) = sδε (p, q) − sε (p, q) and ∆lδ (q) = lδ (q) − l(q). Note that ∆sδ (p, q) is independent of ε. We have the following bounds on ∆sδ and ∆lδ . Lemma 3.5 There is a constant c > 0 such that for any p, q ∈ Q and 0 ≤ δ < 1, |∆sδ (p, q)| ≤ c δ kpkQ kqkQ ,

(3.8)

δ

|∆l (q)| ≤ c δ (kg1 kH 1/2 (Γ) + kg2 kL2 (Γ) )kqkQ .

(3.9)

Proof. From definitions of sε and sδε , for any p, q ∈ Q, we have ∆sδ (p, q) = (uδD (p) − uD (p) − (uδN (p) − uN (p)), uδD (q) − uδN (q))L2 (Ω)

+ (uD (p) − uN (p), uδD (q) − uD (q) − (uδN (q) − uN (q)))L2 (Ω) + εδ (p, q)Q ,

∆lδ (q) = (e uδN (g2 ) − u eN (g2 ) − (e uδD (g1 ) − u eD (g1 )), uδD (q) − uδN (q))L2 (Ω)

+ (e uN (g2 ) − u eD (g1 ), uδD (q) − uD (q) − (uδN (q) − uN (q)))L2 (Ω) + (e uδN (g2δ ) − u eδD (g1δ ), uδD (q) − uδN (q))L2 (Ω) .

Consequently, (3.8) and (3.9) follow immediately from Schwarz inequality, Lemma 3.4, and the regularity properties (2.7), (2.8). The proof is complete. Proof of Theorem 3.3. Now we can prove the stability Theorem 3.3. We recall that pε ∈ Qad and pδε ∈ Qad are such that sε (pε , q − pε ) ≥ l(q − pε ) ∀ q ∈ Qad

(3.10)

sδε (pδε , q − pδε ) ≥ lδ (q − pδε ) ∀ q ∈ Qad

(3.11)

and

9

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

513

respectively. Replace q = pδε in (3.10) and q = pε in (3.11) and add them to get sε (pδε − pε , pδε − pε ) ≤ −(sδε (pδε − pε , pδε ) − sε (pδε − pε , pδε )) + lδ (pδε − pε ) − l(pδε − pε ) ≡ −∆sδ (pδε − pε , pδε ) + ∆lδ (pδε − pε ).

From the coercivity of sε for ε > 0 together with Lemma 3.5, we have α(ε)kpδε − pε k2Q ≤ sε (pδε − pε , pδε − pε )

≤ |∆sδ (pδε − pε , pδε )| + |∆lδ (pδε − pε )|

≤ c δkpδε − pε kQ kpδε kQ + kg1 kH 1/2 (Γ) + kg2kL2 (Γ)



≤ c δkpδε − pε kQ kpδε − pε kQ + kpε kQ + kg1 kH 1/2 (Γ) + kg2 kL2 (Γ)

or  (α(ε) − c δ)kpδε − pε kQ ≤ c δ kpε kQ + kg1 kH 1/2 (Γ) + kg2 kL2 (Γ) ,

 (3.12)

where α(ε) is a positive coercivity constant for bilinear form sε which may depend on ε. For a small enough δ, there is a positive constant c(ε) independent of δ such that α(ε) − c δ ≥ c(ε). Then (3.12) reduces to  kpδε − pε kQ ≤ c δ kpε kQ + kg1 kH 1/2 (Γ) + kg2 kL2 (Γ) which shows the convergence pδε → pε in Q when δ → 0.



We now explore the limit behavior of the solution of Problem 3.1 as the regularization parameter ε → 0. Denote by Z the solution set of Problem 3.1 with ε = 0. Then if it is non-empty, Z is a closed, convex subset of space Q. Denote by p∗ the unique element in Z with minimal Q norm, that is, kp∗ kQ = inf kpkQ . (3.13) p∈Z

We have the following convergence result; its proof is similar to that of [18, Proposition 3.5]. Theorem 3.6 Assume the solution set Z is nonempty. Then pε → p∗ in Q,

4

as ε → 0.

4 A finite element approximation

In this section, we consider the problem of approximating the solution of Problem 3.1. Standard finite element method (FEM) is applied to discretize this problem. We use constant finite element space for an approximation of the light source space Q. Specifically, let {T0,H }H be a regular family of triangulations over domains Ω0 ⊂ Ω with meshsize H > 0. For each triangulation T0,H = {KH }, define finite element space QH = {T ∈ H Q | T |KH ∈ P0 (K), ∀ KH ∈ T0,H }, and set QH ad = Qad ∩ Q . Here we use Pk for the space of all polynomials of degree ≤ k. Then we can define the following discrete problem. 10

514

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

H Problem 4.1 Find a function pH ε ∈ Qad such that H H H H sε (pH ε , q − pε ) ≥ l(q − pε )

∀ q H ∈ QH ad .

(4.1)

Similar to the continuous case, we have the following well-posedness result for Problem 4.1. Proposition 4.2 For each H > 0, Problem 4.1 has a unique solution pH ε which depends continuously on all data. As for error estimate of the finite element solution pH ε of Problem 4.1, we first derive an abstract error estimate. Theorem 4.3 There exists a positive constant c which is independent of ε > 0 and H > 0 such that 1/2

1/2

H H ε1/2 kpH ε − pε kQ ≤ c inf (kq − pε kQ + kSε pε − lkQ kq − pε kQ ). q H ∈QH ad

(4.2)

Proof. By definition, H H H 2 H 2 sε (pH ε − pε , pε − pε ) = kuD (pε − pε ) − uN (pε − pε )kL2 (Ω) + εkpε − pε kQ .

(4.3)

Adding (4.1) and (3.2) with q = pH ε , we obtain H H H H 0 ≤ sε (pε , pH ε − pε ) + sε (pε , q − pε ) − l(q − pε ).

Use this inequality, H H H H sε (pH ε − pε , pε − pε ) ≤ sε (pε , q − pε ) − l(q − pε ).

Thus, H sε (pH ε − p ε , pε − p ε )

H H ≤ sε (pH ε − pε , q − pε ) + Sε pε − l, q − pε



Q H H H ≤ kuD (pε − pε ) − uN (pε − pε )kL2 (Ω) kuD (q − pε ) − uN (q H H H + εkpH ε − pε kQ kq − pε kQ + kSε pε − lkQ kq − pε kQ .

− pε )kL2 (Ω)

Combining this inequality with (4.3), we deduce that H 2 H 2 kuD (pH ε − pε ) − uN (pε − pε )kL2 (Ω) + εkpε − pε kQ h i H H 2 H 2 H ≤ c kuD (q − pε ) − uN (q − pε )kL2 (Ω) + εkq − pε kQ + kSε pε − lkQ kq − pε kQ ,

from which we conclude (4.2). A direct consequence from Theorem 4.3 is in the following. 11

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

515

1 Corollary 4.4 pH ε → pε in Q when H → 0; moreover, if pε ∈ H (Ω0 ), we have 1/2 ε1/2 kpH ε − pε k Q ≤ c H

(4.4)

with c depending on |pε |H 1 (Ω0 ) but independent of ε > 0 and H > 0. We note that the convergence order in (4.4) is not optimal and an improvement is possible. In fact, using the technique in [18, Lemma 4.7], we have Proposition 4.5 There is a constant c independent of H > 0 such that 1/2

1/2 ε1/2 kpH kΠH pε − pε kQ . ε − pε k Q ≤ c H

Consequently, if pε ∈ H 1 (Ω0 ), 1/2

ε1/2 kpH ε − pε kQ ≤ c H |pε |H 1 (Ω0 ) . An examination of the definitions of sε (pH , q) and l(q) shows that we need further to approximate such terms like uD (q, g1), uD (q), and u eD (g1 ). Continuous piecewise linear functions will be utilized for this purpose. Let {Th }h be a regular family of triangulations over domains Ω ⊂ Rd with a meshsize h > 0. For each triangulation Th = {Kh }, define finite element spaces V h and V0h as follows. V h , {v ∈ C(Ω) | v |Kh ∈ P1 , ∀ Kh ∈ Th },

V0h = V h ∩ V0 .

For simplicity, in the following discussion, we further assume (2.11). We will use the same symbol g1 ∈ H 2 (Ω) for its trace g1 ∈ H 3/2 (Γ). Denote by ΠV h v for the piecewise linear interpolant of v ∈ H 2 (Ω) and let g1h = ΠV h g1 ∈ V h . Moreover, we will use the symbol g1h + V0h for the set {v ∈ V h | v(ai ) = g1 (ai ) ∀ vertex ai ∈ Kh ∩ Γ, ∀ Kh ∈ Th }. For each q ∈ Q, denote by uhD (q, g1h ) ∈ g1h + V0h the unique solution of a(u, v) = (q, v)Q

∀ v ∈ V0h

(4.5)

and by uhN (q, g2) ∈ V h the unique solution of a(u, v) = (q, v)Q + (g2 , v)L2 (Γ)

∀ v ∈ V h,

(4.6)

where a(·, ·) is defined in (2.4). Similar to the continuous case, we use the symbols uhD (q), u ehD (g1h ), uhN (q) and u ehN (g2 ) for uhD (q, 0), u ehD (0, g1h), uhN (q, 0) and uhN (0, g2), respectively. 12

516

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

Now we give a discrete counterpart of the bilinear form (3.1) and the linear function (2.9). Given ε ≥ 0, for any p, q ∈ Q, shε (p, q) = sh (p, q) + ε (p, q)Q = (uhD (p) − uhN (p), uhD (q) − uhN (q))L2 (Ω) + ε (p, q)Q , h

l (q) =

(e uhN (g2 )



u ehD (g1h ), uhD (q)



uhN (q))L2 (Ω) .

(4.7) (4.8)

Then shε is symmetric and coercive for each ε > 0, and both shε and lh are uniformly bounded with respect to h. We now introduce a full discretization for Problem 4.1. Problem 4.6 Find ph,H ∈ QH ε ad such that H h,H h H h,H shε (ph,H ε , q − pε ) ≥ l (q − pε )

∀ q H ∈ QH ad .

(4.9)

The following result holds. Proposition 4.7 Problem 4.6 admits a unique solution ph,H in QH ε ad , and the solution depends continuously on the data. The rest of this section is devoted to an error estimation for Problem 4.6. We first present some preliminary results. From [17, Theorem 2.4.2.5 and Proposition 2.5.2.3], under the assumptions (2.11), we have the regularity properties: kuD (q, g1 )kH 2 (Ω) ≤ c (kqkQ + kg1 kH 3/2 (Γ) ),

kuN (q, g2 )kH 2 (Ω) ≤ c (kqkQ + kg2 kH 1/2 (Γ) ).

(4.10) (4.11)

Using these regularity properties together with Aubin-Nitche trick, we have linear finite element error estimates in the following (see [9, Theorem 3.2.5] for detail). Lemma 4.8 For any q ∈ Q, g1 ∈ H 3/2 (Γ) and g2 ∈ H 1/2 (Γ), there exists a constant c independent of q, g1, g2 and h such that kuhD (q) − uD (q)kL2 (Ω) ≤ c h2 kqkQ ,

kuhN (q) − uN (q)kL2 (Ω) ≤ c h2 kqkQ ,

ke uD (g1 ) − u ehD (g1h )kL2 (Ω) ≤ c h2 kg1 kH 3/2 (Γ) , ke uN (g2 ) − u ehN (g2 )kL2 (Ω) ≤ c h2 kg2 kH 1/2 (Γ) . 13

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

517

Let ∆sh (p, q) = sh (p, q) − s(p, q) and ∆lh (q) = lh (q) − l(q) for p, q ∈ Q. Then we have estimates for ∆sh and ∆lh as follows. Lemma 4.9 For any p, q ∈ Q, there is a constant c which is independent of p, q, g1, g2 and h > 0 such that |∆sh (p, q)| ≤ c h2 kpkQ kqkQ , h

2

|∆l (q)| ≤ c h (kg1 kH 3/2 (Γ) + kg2 kH 1/2 (Γ) )kqkQ .

(4.12) (4.13)

Proof. We rewrite ∆sh (p, q) in the following way: ∆sh (p, q) = (uhD (p) − uhN (p), uhD (q) − uhN (q))L2 (Ω) − (uD (p) − uN (p), uD (q) − uN (q))L2 (Ω) = (uhD (p) − uhN (p), uhD (q) − uD (q) − (uhN (q) − uN (q)))L2 (Ω)

+ (uhD (p) − uD (p) − (uhN (p) − uN (p)), uD (q) − uN (q))L2 (Ω) .

Then from Lemma 4.8, and together with regularity properties for uhD (p), uD (q), uhN (p) and uN (q), we obtain (4.12). Similarly, we decompose ∆lh as follows: ∆lh (q) = (e uhN (g2 ) − u ehD (g1h ), uhD (q) − uhN (q))L2 (Ω) − (e uN (g2 ) − u eD (g1 ), uD (q) − uN (q))L2 (Ω) = (e uhN (g2 ) − u ehD (g1h ), uhD (q) − uD (q) − (uhN (q) − uN (q)))L2 (Ω)

+ (e uhN (g2 ) − u eN (g2 ) − (e uhD (g1h ) − u eD (g1 )), uD (q) − uN (q))L2 (Ω) .

From the regularity properties of u ehN (g2 ), u ehD (g1h ), uD (q) and uN (q), by use of Lemma 4.8, we obtain (4.13). We now present an error bound for the finite element solution from Problem 4.6. Theorem 4.10 There exists a constant c > 0, independent of h, such that 2 εkph,H − pH ε ε kQ ≤ c h (kpε kQ + kg1 kH 3/2 (Γ) + kg2 kH 1/2 (Γ) ).

(4.14)

Proof. From (4.9) with q H = pH ε , we have 2 h h,H h,H εkph,H − pH − pH − pH ε ε kQ ≤ sε (pε ε , pε ε )

H h,H h h,H H h,H = shε (pH ε , pε − pε ) − sε (pε , pε − pε ) H h,H h h,H ≤ shε (pH − pH ε , pε − pε ) + l (pε ε ).

Write h h,H h,H lh (ph,H − pH − pH − pH ε ε ) = ∆l (pε ε ) + l(pε ε ).

14

(4.15)

518

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

From (4.1) with q H = ph,H ε , and together with (4.15), we obtain 2 h H H h,H h h,H H h,H εkph,H − pH − pH − pH ε ε kQ ≤ sε (pε , pε − pε ) + ∆l (pε ε ) + sε (pε , pε ε ) H h,H h h,H = ∆sh (pH − pH ε , pε − pε ) + ∆l (pε ε ).

(4.16)

By use of (4.12) and (4.13) for ∆sh and ∆lh , we obtain (4.14) from (4.16). Combining Theorems 4.3 and 4.10, a full error estimate of finite element approximation is as follows. Corollary 4.11 Let pε and ph,H be the unique solutions of Problems 3.2 and 4.9 respectively. ε Then ph,H → pε in Q as h, H → 0. ε Moreover, if pε ∈ H 1 (Ω0 ), then there exists a constant c > 0, depending on kpε kH 1 (Ω0 ) but independent of h and H, such that εkph,H − pε kQ ≤ c H ε1/2 + c h2 (kpε kQ + kg1 kH 3/2 (Γ) + kg2 kH 1/2 (Γ) ). ε At last, we comment that when the solution set Z is nonempty, the convergence of ph,H ε to p∗ follows from the triangle inequality kph,H − p∗ kQ ≤ kph,H − pε kQ + kpε − p∗ kQ ε ε in conjunction with Theorem 3.6 and Corollary 4.11.

5

5 Numerical simulation

In this section, we present some numerical results based on our new formulation for the BLT problem. First, we introduce an iterative scheme for this formulation. Then we provide a detailed finite element discretization process of the iterative algorithm. Finally, we show numerical results from two examples.

5.1

5.1 An iterative algorithm for the BLT problem

Let S, l, Sε , and uD (q, g1), uD (q), u eD (g1 ), uN (q, g2), uN (q) and u eN (g2 ) be given in Sections 2 and 3. Define two operators AD and AN from Q to H 2 (Ω) by AD q = uD (q),

AN q = uN (q) ∀ q ∈ Q, 15

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

519

and view them as two operators from Q to L2 (Ω). Set A = AD − AN and denote by b = u eN (g2 ) − u eD (g1 ) ∈ L2 (Ω). Then for any q ∈ Q, A q − b = (AD − AN ) q − b = uD (q, g1) − uN (q, g2 ).

Denote by A∗D and A∗N the adjoint operators of AD and AN : (A∗D v, q)Q = (v, AD q)L2 (Ω) ,

(A∗N v, q)Q = (v, AN q)L2 (Ω)

∀ v ∈ L2 (Ω), q ∈ Q.

Then A∗ : L2 (Ω) → Q is such that A∗ = A∗D − A∗N . Consequently, for any p, q ∈ Q, sε (p, q) = (A p, A q)L2(Ω) + ε (p, q)Q = (A∗ A p, q)Q + ε (p, q)Q = ((A∗ A + εI) p, q)Q . Therefore, Sε = A∗ A + εI. Similarly, l = A∗ b comes from l(q) = (b, A q)L2(Ω) = (A∗ b, q)Q . For any q ∈ Q, denote by uDN (q) = uD (q, g1 ) − uN (q, g2), and by wD = wD (uDN (q)) ∈ V0 and wN = wN (uDN (q)) ∈ V the solutions of the adjoint variational problems a(v, wD ) = (uDN , v)L2 (Ω)

∀ v ∈ V0

(5.1)

a(v, wN ) = (uDN , v)L2 (Ω)

∀ v ∈ V,

(5.2)

respectively. Then w_D(u_DN(q))|_{Ω0} = A*_D(u_DN(q)) and w_N(u_DN(q))|_{Ω0} = A*_N(u_DN(q)). Thus,

\[
A^*(Aq - b) = (A_D^* - A_N^*)(u_{DN}(q)) = \big(w_D(u_{DN}(q)) - w_N(u_{DN}(q))\big)\big|_{\Omega_0}.
\]

Let P_ad be the projection operator from Q onto Q_ad. Following [14, Chapter I, Remark 3.3], we consider an iterative scheme for solving (3.2).

Algorithm 5.1
1. Choose p^0 ∈ Q_ad, set k = 0.
2. For k = 0, 1, . . . , with p^k ∈ Q_ad known,
   2.1 solve (2.5) and (2.6) to get u_D^k = u_D(p^k, g_1) and u_N^k = u_N(p^k, g_2);
   2.2 compute f^k = u_D^k − u_N^k;
   2.3 solve (5.1) and (5.2) with u_DN(q) replaced by f^k to obtain w_D^k = w_D(f^k) and w_N^k = w_N(f^k);
   2.4 compute w^k = w_D^k − w_N^k;
   2.5 set p̃^{k+1} = W_ρ(p^k) = p^k − ρ(w^k|_{Ω0} + ε p^k);
   2.6 project p̃^{k+1} onto the admissible set Q_ad: p^{k+1} = P_ad p̃^{k+1}.
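An algebraic sketch of this iteration is given below (an illustration only, not the paper's own MATLAB/FEM implementation). The forward map A and data b are stand-ins — a generic smoothing matrix — so that A p − b plays the role of u_D(p, g_1) − u_N(p, g_2) and A^T plays the role of the adjoint solves (5.1)–(5.2); the step size ρ is chosen empirically, as in the numerical experiments of this paper, and the stopping rule is (5.4).

```python
import numpy as np

# Projected gradient iteration in the spirit of Algorithm 5.1, on a stand-in operator.
rng = np.random.default_rng(1)
n = 200
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
A /= np.linalg.norm(A, 2)                              # normalize the stand-in operator
p_true = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
delta = 1e-3
b = A @ p_true + delta * rng.standard_normal(n)        # noisy "measurements"

eps, rho, mu = 1e-7, 1.0, 5.0                          # regularization, step size, stopping constant
p = np.zeros(n)                                        # step 1: p^0 in Q_ad
for k in range(10000):
    f = A @ p - b                                      # steps 2.1-2.2
    w = A.T @ f                                        # steps 2.3-2.4
    p_next = np.maximum(p - rho * (w + eps * p), 0.0)  # steps 2.5-2.6 (projection onto q >= 0)
    done = np.linalg.norm(p_next - p) <= mu * delta    # stopping criterion (5.4)
    p = p_next
    if done:
        break
print("iterations:", k + 1,
      "relative L2 error:", np.linalg.norm(p - p_true) / np.linalg.norm(p_true))
```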

520

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

k Note that under our priori smoothness assumptions on the given data, wD ∈ H 2 (Ω), k wN ∈ H 2 (Ω), and thus w k ∈ H 2(Ω). Since H 2 (Ω) ֒→ C(Ω), w k |Ω0 is well defined. From (3.7) in the proof of [14, Chapter I, Theorem 3.1], for the iterates to converge, we should select those ρ which guarantees the operator W ρ = Wρ Pad over space Q is a strictly contractive mapping. By the contractivity of the projection operator Pad , we only need to show Wρ is a strictly contractive mapping. From (Sε q, q)Q ≥ εkqk2Q , we have, for any q1 , q2 ∈ Q,

kWρ (q1 ) − Wρ (q2 )k2Q = kq1 − q2 k2Q − 2 ρ(Sε (q1 − q2 ), q1 − q2 )Q + ρ2 kSε (q1 − q2 )k2Q ≤ (1 − 2 ρ ε + ρ2 kSε k2 )kq1 − q2 k2Q ,

(5.3)

where kSε pkQ |(Sε p, q)Q | |sε (p, q)| = sup sup = sup sup . p∈Q,p6=0 kpkQ p∈Q,p6=0 q∈Q,q6=0 kpkQ kqkQ p∈Q,p6=0 q∈Q,q6=0 kpkQ kqkQ

kSε k = sup

Thus, Wρ is a strict contraction mapping if 0 < ρ < 2ε/kSε k2 with Sε = A∗ A + εI. Moreover, the contraction factor (1 − 2 ρ ε + ρ2 kSε k2 ) in (5.3) attains its minimum at ρ = ε/kSε k2 . Next, we discuss a discrete analogue of Algorithm 5.1 for the discrete Problem 4.6. For convenience, we assume T0,H and Th are consistent, i.e., the triangulation T0,H is a restriction of N0 t the triangulation Th on Ω0 . Let {Ti }N i=1 and {Tik }k=1 be the elements in Ω and Ω0 , respectively, where Nt and N0 denote the number of elements in Ω and Ω0 respectively. Any q H ∈ QH with P 0 q H |Tik = qk , 1 ≤ k ≤ N0 , can be written as q H = N k=1 qk χk , where χk is the characteristic t function of the element Tik . Set q = (q1 , q2 , · · ·, qN0 ) , where (·)t stands for transposition of (·). As a result, we can define an isomorphism JQ : RN0 → QH through q H = JQ q. Let n be the number of nodes of the triangulation Th , and let ϕi (x) ∈ V h , 1 ≤ i ≤ n, be the node basis functions of the finite element space V h associated with grid nodes xi . Then, for the problems (4.5) and (4.6), the solutions uhD ∈ g1h + V0h and uhN ∈ V h can be Pn Pn h h expanded by uhD = i=1 uD,i ϕi and uN = i=1 uN,i ϕi , respectively, where uD,i = uD (xi ) and uN,i = uhN (xi ). Let I = {1, 2, · · ·, n}, Ib = {i ∈ I | xi ∈ Γ}, I0 = {1, 2, · · ·, N0 }, Ij = {k ∈ I0 | xj is a vetex of element Tik }, j ∈ I. Moreover, define Z A = (aji ), aji = D ∇ϕi ∇ϕj dx, i, j ∈ I, ΩZ M = (mji ), mji = µa ϕi ϕj dx, i, j ∈ I, Ω ( R ϕ dx, k ∈ Ij , Tik j F = (fjk ), fjk = j ∈ I, 0, k ∈ I0 \Ij , Z t z = (z1 , z2 , · · ·, zn ) , zj = g2 ϕj ds, Γ

K = A + M. 17
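For readers unfamiliar with finite element assembly, the sketch below shows how matrices of the type A, M (hence K = A + M), and the mass-type matrix C used further below can be put together with linear elements on a toy triangulation. It is an illustration only: the mesh and the constant values of D and µ_a are made up, the matrices F and z are omitted, and the paper's own computations use a MATLAB code on Delaunay meshes.

```python
import numpy as np

# P1 finite element assembly of stiffness (A), mass with coefficient mu_a (M),
# plain mass (C) and K = A + M on a tiny 4-triangle mesh of the unit square.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
tris = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])
D, mu_a = 0.03, 0.4                                     # constant coefficients (illustrative)

n = len(nodes)
A = np.zeros((n, n)); M = np.zeros((n, n)); C = np.zeros((n, n))
for tri in tris:
    v = nodes[tri]
    B = np.column_stack((v[1] - v[0], v[2] - v[0]))     # edge matrix of the triangle
    area = 0.5 * abs(np.linalg.det(B))
    # gradients of the three P1 basis functions on this triangle
    G = np.linalg.solve(B.T, np.array([[-1.0, 1.0, 0.0], [-1.0, 0.0, 1.0]]))
    Ke = D * area * (G.T @ G)                           # element stiffness
    Me = mu_a * area / 12.0 * (np.ones((3, 3)) + np.eye(3))   # element mass, exact for P1
    Ce = area / 12.0 * (np.ones((3, 3)) + np.eye(3))
    for i_loc, i_glob in enumerate(tri):
        for j_loc, j_glob in enumerate(tri):
            A[i_glob, j_glob] += Ke[i_loc, j_loc]
            M[i_glob, j_glob] += Me[i_loc, j_loc]
            C[i_glob, j_glob] += Ce[i_loc, j_loc]
K = A + M
print("total mass (should equal the domain area):", C.sum())   # 1.0 for the unit square
```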

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

521

In what follows, we use the same symbol for a finite element function and its vector representation associated with the given finite element basis functions. Then ukD and ukN in Algorithm 5.1 can be calculated by K

ukD

k

=Fp ,

ukD,i

K ukN = F pk + z,

= g1 (xi ), i ∈ Ib , ukN =

n X

ukD

=

n X

ukD,iϕi ,

i=1

ukN,iϕi .

i=1

Similarly, if we define C = (cji ),

Z

cji =



ϕi ϕj dx, i, j ∈ I

then K

k wD

k

=Cf ,

k K wN = C f k,

k wD,i

= 0, i ∈ Ib ,

k wN =

n X

k wD

=

n X

k wD,i ϕi ,

i=1

k wN,i ϕi .

i=1

As for the realization of Step 2.6 in Algorithm 5.1, let Q_ad = {p ∈ Q | p ≥ 0 a.e. in Ω_0}. Then q^H ∈ Q^H_ad if and only if J_Q^{-1} q^H ∈ R^{N_0}_+ := {q ∈ R^{N_0} | q ≥ 0}. Consequently, the projection operator P_ad has the form P_ad q^H = J_Q(max{J_Q^{-1} q^H, 0}) for any q^H ∈ Q^H. The stopping criterion is as follows. Assume the measurement data are polluted by noise with noise level δ > 0. Then the stopping rule is ‖p^{k+1} − p^k‖_Q ≤ µ δ

(5.4)

for some constant µ > 1. The value of µ affects the iterative times and the accuracy in the reconstructed solution.

5.2

5.2 Numerical examples

We reconstruct light source function based on Problem 4.6 by applying Algorithm 5.1. The computational results presented here are performed by using a MATLAB code in a Dell OPTIPLEX GX280 (32-bit-capable 3.00GHz Pentium 4 CPU, 256MB of RAM). In all tests, we reconstruct the source function solution for different arguments including regularization parameter ε, meshsize h, noise level δ and parameter µ. We note that all these parameters affect the accuracy of the approximate source function and the iterative number in Algorithm 5.1, etc. Many references, e.g. [5, 15, 23], can be consulted for a proper regularization parameter 18

522

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

0.8


x

Figure 1: Sample Delaunay triangulations ε in the presence of noise. In our examples, we take the regularization parameter ε = 10−7 . We use u + 2 D∂ν u = 0 on Γ, (5.5) as the boundary condition for the PDE (2.1), which is resulted from the condition that the experiments are implemented in a dark environment. Then we take g = −D∂ν u on Γ

(5.6)

for the measurement on the boundary. From (5.5) and (5.6), we have g1 = 2 g and g2 = −g. Let g, then g1 and g2 , be polluted by noise with level δ = 10%. The admissible set is taken to be Qad = {q ∈ Q | q ≥ 0 a.e. in Ω0 }. We use Delaunay elements for the triangulations {Th }h of the problem domain Ω and {T0,H }H of the permissible domain Ω0 , and assume they are consistent so that H = h. See Figure 1 for examples of Delaunay triangulations. Denote the reconstructed approximate light source and by E = kph,h by ph,h ε ε − pkQ /kpkQ the relative 2 error in L -norm. In the first example, we let the problem domain Ω=(0, 10) × (0, 10), and the absorption and reduced scattering coefficients µa and µ′s be constant in the whole domain with values 0.040 and 1.5 respectively. For our simulation, we take p ≡ 1 pW for the true light source in Ω∗ = {(7.5, 8.75) × (2.5, 3.75)} ∪ {(7.5, 8.75) × (6.25, 7.5)}, and solve the equation (2.1) with boundary condition (5.5) to get the state u by the FEM in a triangulation with a small meshsize. In this example, we choose h = 0.1213 with 91136 elements and 45889 nodes in Ω for a triangulation with a small meshsize. Then from (5.5) and (5.6), we set g = u/2 for the measurement g and thus g1 and g2 (with noise) are obtained. Take the permissible domain to be Ω0 = Ω∗ . 19

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

523

h h,h Figure 2: Left: ph,h ε − p in Ω0 ; right: u corresponding to pε

Table 1: Iterative number, computation time and relative error E for different ρ and for meshsize h = 0.3750 in the first example ρ 0.175 0.17 0.15 0.10 0.05 0.01 0.005 0.001 iter-num 161 39 9 4 3 2 1 1 cpu-time 30.61 s 7.95 s 2.19 s 1.05 s 1.03 s 0.84 s 0.66 s 0.66 s E 55.35% 26.25% 11.88% 9.10% 6.49% 14.08% 18.43% 19.69% We reconstruct source function solution ph,h for different parameter ρ and meshize h. ε Because the norm of the operator Sε is difficult to compute, it is not easy to give an upper bound for ρ, not to mention the best ρ. We can take the smallest value ρmax from those ρ which make the corresponding sequence {kpk − pk−1 k2 }k nondecreasing during the iteration as an upper bound for ρ. In this test, a value near 0.05 for ρ appears to be an advisable choice for a good reconstruction. As for the stopping criterion (5.4), we take µ = 5. We plot ph,h ε − p and the corresponding state of this reconstructed light source function for ρ = 0.05 and h = 0.3750 with 5696 elements and 2929 nodes in Figure 2. We show the effect of the parameter ρ on the iterative number, the computation time, and the accuracy of the regularized approximate source in Table 1. The dependence of these terms on the meshsize h is provided in Table 2. Table 2: Iterative number, computation time and relative error E for different meshsize and for ρ = 0.05 in the first example h 1.2646 0.6767 0.3750 0.2133 iter-num 2 3 3 4 cpu-time 0.16 s 0.39 s 1.03 s 5.86 s E 13.63% 10.91% 6.49% 3.89%

20

524

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

Figure 3: Left: p and ph,h in Ω0 ; right: corresponding state u and uh ε Table 3: Iterative number, computation time and relative error E meshsize h = 0.1336 in the second example ρ 1.1 1.0 0.5 0.1 0.05 0.01 iter-num 71 16 3 8 8 1 cpu-time 12.30 s 3.20 s 1.02 s 1.61 s 1.86 s 0.69 s E 10.30% 3.51% 0.53% 1.28% 3.27% 7.09%

for different ρ and for 0.001 1 0.69 s 7.22%

Inpour second example, let Ω be a circle located atporigin with radius 2. Let Ω1 = {(x, y) ∈ Ω | x2 + y 2 < 0.6}, Ω2 = {(x, y) ∈ Ω | 0.6 < x2 + y 2 < 2}, and let the absorption and reduced scattering parameters be 0.09 and 2.3 in Ω1 , and 0.10 and 1.8 in Ω2 . For the measurements g1 and g2 , we place a light source with formulation p = 1 − 10(x − 1.1)2 − 4 (y − 0.45)2 in the domain Ω∗ = {(x, y) ∈ Ω | (x − 1.1)2 /0.22 + (y − 0.45)2 /0.52 = 1} and take the restriction on the boundary of the corresponding approximate state function obtained by the FEM for meshsize h = 0.0431 with 80384 elements and 40465 nodes as 2 g. Again, Ω0 = Ω∗ , and we use µ = 4 in (5.4) for the stopping criterion. We plot the true light source density distribution p and an approximate one ph,h as well as their corresponding state ε in Figure 3 for ρ = 0.5 and h = 0.1336. Again we show the effect of the parameters ρ and h on the iterative number, the time our compute costs and the accuracy of the regularized approximate source in Tables 3 and 4.

21

GONG ET AL:...BIOLUMINESCENCE TOMOGRAPHY

525

Table 4: Iterative number, computation time and relative error E for different meshsize and for ρ = 0.50 in the second example h 0.4667 0.2387 0.1336 0.0758 iter-num 1 2 3 3 cpu-time 0.13 s 0.42 s 1.02 s 4.36 s E 2.16% 0.86% 0.53% 0.41%

References [1] S. R. Arridge, Optical tomography in medical imaging, Inverse Problems 15 (1999), R41–R93. [2] K. Atkinson and W. Han, Theoretical Numerical Analysis: A Functional Analysis Framework, second edition, Springer-Verlag, New York, Texts in Applied Mathematics, Volume 39, 2005. [3] S. C. Brenner and L. R. Scott, The Mathematical Theory of Finite Element Methods, third edition, Springer-Verlag, New York, 2008. [4] A. J. Chaudhari, et al, Hyperspectral and multispectral bioluminescence optical tomography for small animal imaging, Phys. Med Biol. 50 (2005), 5421–5441. [5] J. Cheng and M. Yamamoto, One new strategy for a priori choice of regularizing parameters in Tikhonov’s regularization, Inverse Problems 16 (2001), 31–38. [6] X.-L. Cheng, R.-F. Gong, and W. Han, A generalized mathematical framework for bioluminescence tomography, Computer Methods in Applied Mechanics and Engineering 197 (2008), 524–535. [7] X.-L. Cheng, R.-F. Gong, and W. Han, Numerical approximation of bioluminescence tomography based on a new formulation, Journal of Engineering Mathematics (2008), DOI: 10.1007/s10665-008-9246-y. [8] S. R. Cherry, In vivo molecular and genomic imaging: new challenges for imaging physics, Phys. Med. Biol. 49 (2004), 13–48. [9] P. G. Ciarlet, The Finite Element Method for Elliptic Problems, North-Holland, 1978. [10] W.-X. Cong, K. Durairaj, L.-V. Wang, and G. Wang, A Born-type approximation method for bioluminescence tomography, Med. Phys. 33 (2006), 679–686. [11] A.-X. Cong and G. Wang, Multispectral bioluminescence tomography: methodolgy and simulation, Int. J. Biomed. Imag. 2006, Article ID 57614, 7 pages.


[12] W.-X. Cong, et al, A practical reconstruction method for bioluminescence tomography, Opt. Express 13 (2005), 6756–6771. [13] L. C. Evans, Partial Differential Equations, American Mathematical Society, 1998. [14] R. Glowinski, Numerical methods for nonlinear variational problems, Springer-Verlag, 1983. [15] G. H. Golub and U. V. Matt, Tikhonov regularization for large scale problems, in Scientfic Computing, eds G. H. Golub, S. H. Lui, F. T. Luk and R. J. Plemmons, Springer, 1997, pp. 3–26. [16] W. Gong, R. Li, N. Yan and W. Zhao, An improved error analysis for finite element approximation of bioluminescence tomography, J. Comp. Math. 26 (2008), 297–309. [17] P. Grisvard, Elliptic Problems in Nonsmooth Domains, Pitman, 1985. [18] W. Han, W.-X. Cong, and G. Wang, Mathematical theory and numerical analysis of bioluminescence tomography, Inverse Problems 22 (2006), 1659–1675. [19] W. Han, W.-X. Cong, and G. Wang, Mathematical study and numerical simulation of multispectral bioluminescence tomography, International Journal of Biomedical Imaging 2006 (2006), Article ID 54390, 10 pages, doi:10.1155/IJBI/2006/54390. [20] W. Han, K. Kazmi, W.-X. Cong, and G. Wang, Bioluminescence tomography with optimized optical parameters, Inverse Problems 23 (2007), 1215–1228. [21] W. Han and G. Wang, Theoretical and numerical analysis on multispectral bioluminescence tomography, IMA Journal of Applied Mathematics 72 (2007), 67–85. [22] W. Han and G. Wang, Bioluminescence tomography: biomedical background, mathematical theory, and numerical approximation, Journal of Computational Mathematics 26 (2008), 324–335. [23] P. Hansen, The use of the L-curve in the regularization of discrete ill-posedproblems, SIAM J. Sci. Comput. 14 (1993), 1487–1503. [24] A. D. Klose, V. Ntziachristos, and A. H. Hielscher, The inverse source problem based on the radiative transfer equation in optical molecular imaging, J. Comput. Phys. 202 (2005), 323–345. [25] C. S. Levin, Primer on molecular imaging technology, European J. Nuclear Med. and Mol. Imag. 32 (2005), 325–345. [26] J.-L. Lions, Optimal Control of Systems Goverened by Partial Differential Equations, Springer, 1971.


[27] Y. J. Lv and et al, A multilevel adaptive finite element algorithm for bioluminescence tomography, Opt. Express 14 (2006), 8211–8223. [28] Y. J. Lv, Spectrally resolved bioluminescence tomography with adaptive finite element analysis: methodology and simulation, Phys. Med. Biol. 52 (2007), 4497–4512. [29] T. F. Massoud and S. S. Gambhir, Molecular imaging in living subjects: seeing fundamental biological processes in a new light, Genes Dev. 17 (2003), 545–580. [30] F. Natterer and F. W¨ ubbeling, Mathematical Methods in Image Reconstruction, SIAM, Philadelphia, 2001. [31] P. Ray, A. M. Wu, and S. S. Gambhir, Optical bioluminescence and bositron emission tomography imaging of a novel fusion reporter gene in tumor xenografts of living mice, Cancer Res. 63 (2003), 1160–1165. [32] T. Troy, D. J. Mcmullen, L. Sambucetti, and B. Rice, Quantitative comparison of the sensitivity of detection of fluorescent and bioluminscent reporters in animal models. Mol. Imag. 3 (2004), 9–23. [33] G. Wang, E. A. Hoffman, et al., Development of the first bioluminescent CT scanner, Radiology 229(P) (2003), 566. [34] G. Wang, Y. Li, and M. Jiang, Uniqueness theorems in bioluminescence tomography, Med. Phys. 31 (2004), 2289–2299. [35] Q. Zhang, L. Yin, Y. Tan, Z. Yuan, and H. Jiang, Quantitative bioluminescence tomography guided by diffuse optical tomography, Opt. Express 16 (2007), 1481–1486.


JOURNAL 528 OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 528-539, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Existence and Uniqueness of the Solution for Degenerate Semilinear Parabolic Equations W. Y. Chan Department of Mathematics, Southeast Missouri State University, Cape Girardeau, MO 63701-6700, USA Email address: [email protected]

ABSTRACT

For the problem given by x^q u_t − (x^γ u_x)_x = u^p for 0 < x < a, 0 < t < T, u(x, 0) = u_0(x) for 0 ≤ x ≤ a, and u(0, t) = 0 = u(a, t) for 0 < t < T, where q ≥ 0, γ ∈ [0, 1), p > 1, and u_0(x) is a nonnegative function for 0 ≤ x ≤ a, this paper studies existence and uniqueness of the classical solution u of the problem.

Furthermore, the blow-up set of the solution is investigated.

Key words: Degenerate parabolic problem; Comparison Theorem; Classical solution; Eigenfunction; Blow-up

1. INTRODUCTION

Let T ≤ ∞, and let q, γ, p, a be constants such that q ≥ 0, γ ∈ [0, 1), p > 1, a > 0. Let D = (0, a), D̄ = [0, a], Ω = D × (0, T), Ω̄ = D̄ × [0, T), ∂Ω = (D̄ × {0}) ∪ ({0, a} × (0, T)), and Lu = x^q u_t − (x^γ u_x)_x. The following degenerate semilinear parabolic first initial-boundary value problem is studied,

    Lu = u^p in Ω,                                                         (1)
    u(x, 0) = u_0(x) on D̄,  u(0, t) = 0 = u(a, t) for t ∈ (0, T),          (2)

where u_0(x) is a nonnegative function such that u_0(0) = 0 = u_0(a) and u_0(x) ∈ C^{2+α}(D̄) for some α ∈ (0, 1). The study of the problem (1)-(2) is motivated by the research papers of Chen, Liu, and Xie [5], and Floater [7]. Chen, Liu, and Xie studied the blow-up set of the problem (1)-(2) with a nonlocal source term ∫_0^a u^p dx. They showed that u blows up in a finite time and the blow-up set is D. When γ = 0, Floater studied the blow-up set of u if 1 < p ≤ q + 1 and u_0(x) satisfies the condition (d/dx)(u_0(x)/x) ≤ 0 in D. He showed that if the solution of the problem (1)-(2) blows up in a finite time, then it blows up only at x = 0. When p > q + 1, Chan and Liu [4] proved that x = 0 is not a blow-up point if u_0(x) satisfies the condition u_0'' + u_0^p ≤ K u_0 in D for some positive constant K. In addition, they showed that the blow-up set is a compact subset of D. Without the source term u^p, the problem (1) can be used to illustrate the heat conduction in a rigid slab whose faces at x = 0 and x = a are in contact with a heat reservoir (cf. Day [6]); x^q and x^γ are the heat capacity and the thermal conductivity of the slab, respectively.


When γ = 0 and q = 1, the problem (1) can describe the temperature u of the channel flow of a fluid with

a temperature-dependent viscosity in the boundary layer (cf. Chan and Kong [3], and Ockendon [11]); here, x and t denote the coordinates perpendicular and parallel to the channel wall, respectively. When γ = q, (1) is transformed into

    u_t − u_{xx} − (γ/x) u_x = u^p.

The behavior of the operator Lu = u_t − u_{xx} − (γ/x) u_x was studied by Alexiades [1], and Chan and Chen [2].

In Section 2, we study existence and uniqueness of the classical solution of the problem (1)-(2). In Section 3, we assume that u_0(x) satisfies the following condition,

    x u_0'(x) − u_0(x) < 0 in D.                                           (3)

Using an approach different from Floater, we show that u blows up only at x = 0 when 1 < p ≤ q + 1.
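To make the object of study concrete, here is a crude explicit finite-difference experiment for problem (1)-(2); all parameter values, the initial datum, and the time step rule are illustrative choices of ours and are not taken from the paper.

```python
import numpy as np

# x^q u_t = (x^gamma u_x)_x + u^p on (0, a), with u(0, t) = u(a, t) = 0.
q, gamma, p, a = 1.0, 0.5, 2.0, 1.0        # note p = q + 1, the regime of Section 3
N = 100
x = np.linspace(0.0, a, N + 1)
dx = x[1] - x[0]
u = 30.0 * x * (a - x)                     # u(0) = u(a) = 0 and x u' - u = -30 x^2 < 0, cf. (3)
xh = 0.5 * (x[:-1] + x[1:])                # half-points carrying the flux x^gamma u_x
dt = 0.1 * dx ** 2 * np.min(x[1:-1] ** q)  # conservative explicit time step
t = 0.0
while t < 0.02 and u.max() < 1e3:
    flux = xh ** gamma * np.diff(u) / dx
    u[1:-1] += dt * (np.diff(flux) / dx + u[1:-1] ** p) / x[1:-1] ** q
    u[0] = u[-1] = 0.0
    t += dt
print("t = %.4f, max u = %.2f attained at x = %.3f" % (t, u.max(), x[np.argmax(u)]))
```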

2. EXISTENCE AND UNIQUENESS OF THE SOLUTION Firstly, we prove a comparison theorem. Lemma 1. For any v2C D

2 (0; T ) and bounded nonnegative function B (x; t) on D

[0; ] \ C 2;1 (D

(0; ]), and (L

u then u

v on D

f0g [ (f0; ag

(L

B) v in D

v on the parabolic boundary

D

(0; ] ;

f0g [ (f0; ag

(0; ]) ;

3+ )=2

v + " 1 + x(1

)=2

ect where " and c are positive real numbers. Then, w > 0 on

(0; ]). By a direct computation,

(L

As x ! 0, x(

B) u

[0; ].

Proof. Let w = u D

[0; ], if u and

h B) (u v) + (L B) " 1 + x(1 ( h i 1 ct "e 1 + x(1 )=2 (cxq B) +

)=2

B) w = (L

! 1. Let k1 = max(x;t)2D

[0; ]

2

1 2

x(

3+ )=2

i

2

x

)

:

B, and s1 denote the positive root of h

1 + x(1

)=2

B) w > 0 for (x; t) 2 (0; s1 ]

If s1 < a, we choose c

2 ( 3+ )=2

i

k1 = 0:

Then, (L

ect

k1 : sq1

(0; ] :



3

Therefore, (L Suppose that w

0 somewhere in D

B) w > 0 in D

(0; ] :

(4)

(0; ], then the set

ft 2 (0; ] : w (x0 ; t)

0 for some x0 2 Dg

is nonempty. Let t denote its in…mum. Since w (x; 0) > 0 on D, 0 < t such that w (x; t) = 0. We have wt (x; t) and wxx (x; t) (L

. Let x denote the smallest x 2 D

0. At t, w attains its minimum at x, it follows that wx (x; t) = 0

0. Therefore, at (x; t) B) w (x; t) = xq wt (x; t)

It contradicts (4). Hence, w > 0 on D Let (x) = x (a

x) where

x

1

[0; ]. As " ! 0+ , u

v

x wxx (x; t)

2 (0; 1) and

+

h0 (x)

wx (x; t)

B (x; t) w (x; t)

0 on D

0:

[0; ].

< 1, h0 be a positive constant such that u0 (x) on D;

(5)

~ be a positive constant less than a=2 such that there exists some t0 for which the initial value problem, ~

a

0

h (t) =

2 p

hp (t)

~q+2

for t 2 (0; t0 ] , h (0) = h0 ;

(6)

(t)

(7)

has a unique solution, and ~

p

a

~

p

hp

1

0 for t 2 (0; t0 ] ;

where = min Let

(x; t) =

(x) h (t), ! = D

construction, h0 (t) Lemma 2.

)~

(1

Proof. By (5),

2

~

a

(0; t0 ], ! = D

0 for t 2 (0; t0 ] and

(x; t)

+

;

(1

) a

[0; t0 ], and @! = D

+

~

~

2

:

f0g [ (f0; ag

(x; t) 2 C (!) \ C 2;1 (!).

u (x; t) on !.

(x; 0)

u0 (x) on D. Also,

(0; t) = 0 and

(a; t) = 0 for t > 0. A direct computation

gives p

L

= xq+ (a x

p

(a

= xq+ (a

x) h0 (t) x)

p

(a

p

h (t)

hp (t)

d h x dx

x) h0 (t) + (1

+ ( + 2 )x x

(0; t0 ]). By the

x)

+ p

1

(a

hp (t) :

x)

+

)x 1

1

+

h (t) + (1

(a

2

x)

(a )x

x

+

(a

x)

x) h (t) +

(a

x)

2

h (t)

1

i


4 i When (x; t) 2 0; ~

(0; t0 ], by (7) we have p

L

)~

(1 h (t) 0:

h Similarly, when (x; t) 2 a

h

~; a p

L

~

(1

0: ~

p

+

2

p

~

a

~

a hp

h (t) i 1 (t)

~

2

~

p

~

a

p

hp (t)

(0; t0 ],

h (t)

When (x; t) 2 ~; a


h

~

+

~

) a p

p

~

a

h (t)

hp

1

p

~

a

~ p hp (t)

i (t)

(0; t0 ], by (6) we obtain L

p

x) h0 (t)

xq+ (a

~q+2 h0 (t)

a

x

~

p

2 p

(a

x)

p

hp (t)

hp (t)

= 0: By Lemma 1,

(x; t)

u (x; t) on !.

To prove existence and uniqueness of the solution of the problem (1)-(2), let creasing function such that (x) = 0 when x such that

0 and (x) = 1 when x

< a=2. We also let D = ( ; a), ! = D

let @! = D

f0g [ (f ; ag

and

(0; t0 ], D = [ ; a], and ! = D

8 > > 0; > >
> > > : 1;

1 ;

[0; t0 ]. In addition,

8 > > 0; > >
> : 0;

0

x

1 u0 (x) ;

; 0, it follows from (8) u

u (x; t) where h i ~ = ~b1 ; ~b2 that u is a solution. For any (x1 ; t1 ) 2 !, there exist sets E !0

u exists for all (x; t) 2 !. Let u (x; t) = lim

2

~ such that (x1 ; t1 ) 2 E

^ E

t^1

^b1 > 0 and t~1

1

^ we have for any in E,

t0 ). Since u

constant q~ > 1, i. jju jjLq~(E^ ) ii. For

jj jjLq~(E^ )

> 0,

x tends to 0 when iii. jjx

k2 for some positive constant k2 ,

q p

u jjLq~(E^ )

If we choose q~ > 3= (2

q+

1

Lq~([^ b1 ;^ b2 ] (t;t+ ))

=

h ^bq~( 2

q+

1)+1

[~ q( q+

^bq~( 1

q+

1)+1 1=~ q

1) + 1]

i1=~q

1=~ q

approaches 0, ^b q jj 1

p

jjLq~(E^ ) .

), by Theorem 4.9.1 of Ladyµzenskaja, Solonnikov, and Ural0 ceva [10, pp. 341-342]

^ . By Theorem 2.3.3 there [10, p. 80], W 2;1 E ^ ,! H u 2 Wq~2;1 E q~

; =2

^ . Thus, jju jj E H

; =2

(E^ )

k3


6


for some positive constant k3 . Now, x

q p

u

H

; =2

(E^ )

^b q jj jjp + sup x 1 1 ^ (x;t)2E ^ (~ x;t)2E

+ sup ^ (x;t)2E ^ (~ x;t)2E

jup (~ x; t)j jx q jx x ~j

q

jup (x; t) up (~ x; t)j jx x ~j x ~

q

j

up (x; t)

q

x

+ sup

t~

t

^ (x;t)2E (x;t~)2E^

up x; t~ =2

:

By the mean value theorem, we have x

q p

u

H

; =2

(E^ )

^b q jj jjp + p^b q jj jjp 1 jju jj 1 1 1 1 H

p

; =2

(E^ ) + jj jj1 x

for some positive constant k4 which is independent of . In addition, jjx

q+

jjH

; =2

q H

; =2

(E^ ) and

k4

(E^ ) x

q+

1 H

; =2

(E^ )

are bounded. Then, by Theorem 4.10.1 of Ladyµzenskaja, Solonnikov, and Ural0 ceva [10, pp. 351-352], we have jju jjH 2+

;1+ =2

(E~ )

k5

for some positive constant k5 which is independent of . This implies that u , (u )t , (u )x , and (u )xx are ~ By the Ascoli-Arzela theorem, jjujj 2+ equicontinuous in E. H

;1+ =2

are the limits of the corresponding partial derivatives of u . Since u (0; t) = 0 = u (a; t) for t 2 [0; t0 ]. Thus, u 2 C (!) \ C 2+

;1+ =2

k5 , and the partial derivatives of u

(E~ ) u

0 on !, by the Sandwich theorem

((0; a]

[0; t0 ]). By Lemma 1, there exists

a unique nonnegative solution u to the problem (1)-(2). Let T = supft0 : the problem (1)-(2) has a unique nonnegative solution u 2 C (!)\C 2+

;1+ =2

((0; a]

[0; t0 ])g.

A proof similar to that of Theorem 2.5 of Floater [7] gives the following theorem. Theorem 4. The problem (1)-(2) has a unique nonnegative solution u2C If T < 1, then u is unbounded in

\ C 2+

;1+ =2

((0; a]

[0; T )) :

.

3. BLOW-UP OF THE SOLUTION Let

(x) be the fundamental eigenfunction of the following Sturm-Liouville eigenvalue problem, (x

where

0 0

) + xq = 0 in D,

(0) = 0 =

(a) ;

is its corresponding eigenvalue. From the result of Chen, Liu, and Xie [5], ! p q+2 1 2 2 2 J 1 x ; (x) = k6 x q+2 q+2

(11) > 0 and



where J(1

)=(q+2

)

7

(z) is the Bessel function of the …rst kind with order (1

) = (q + 2

), and

(x) is

positive in D for some positive constant k6 . We choose k6 such that maxx2D (x) = 1. Ra Let U (t) = 0 xq udx. We modify the proof of Theorem 8 of Kaplan [8] to obtain the following blow-up

result.

1=(p 1)

Lemma 5. If U (0) > ( apq ) Proof. Multiply

, then u blows up in a …nite time.

(x) on both sides of (1), we obtain xq ut = (x ux )x

+ up :

Integrate the above equation with respect to x from 0 to a, and by (2) and (11) we get Z

Z

a q

x u dx

0

a

0 0

(x

) udx +

0

t

Z

a

xq udx +

0

Z

Z

a

up

0 a

x a

xq u aq

0

q

dx p

dx:

By Jensen’s inequality, Z

Z

a q

x u dx

0

Z

a q

x

udx +

0

t

a

0

p

xq udx aq

:

This inequality is equivalent to d e tU dt

(p 1)t

e

apq

e tU

p

:

Integrate the above expression from 0 to t, it yields e t U (t)

p+1

U

p+1

1 h e apq

(0)

(1 p)t

i 1 ;

which implies U1 By the assumption, U 1

p

p

(t)

1 + U1 apq

p

(0)

1 apq

e

(p 1)t

:

(0) < 1= ( apq ). Thus, U (t) tends to in…nity in a …nite time which implies that

u (x; t) blows up in a …nite time. Let I = xux

u + "~xr um ;

where r, m, and "~ are positive real numbers such that: (a) r

q+2

(b) 1 < m < p,

,

(12)

CHAN: ...SEMILINEAR PARABOLIC EQUATIONS

8

535

(c) let "~ be a positive real number and "~

(r

min

q

2 + ) (r + + m) + r (m mar (2r q 2 + )

1)

;

+r 2

ma

p m (2r q

2+ )

;

(d) by (3), we also choose "~ satisfying xu00 Lemma 6. If p

u0 + "~xr um 0

0 for x 2 D:

q + 1 and u0 (x) satis…es (3), then I

0 on

.

Proof. Since limx!0 xux = 0, limx!0 I (x; t) = 0 for t 2 (0; T ). De…ne I (0; t) = limx!0 I (x; t). Since u (x; t)

0 in

and u (a; t) = 0, it follows that ux (a; t)

By the condition (d), I (x; 0)

0 for t 2 (0; T ). Then, I (a; t)

0 for t 2 [0; T ).

0 for x 2 D. Di¤erentiate (12) with respect to t, it gives ut + "~mxr um

It = xuxt

1

ut :

(13)

Similarly, Ix = xuxx + "~mxr um

1

ux + "~rxr

1 m

u :

(14)

By (12), this expression is equivalent to Ix = xuxx + "~mxr

1 m 1

u

I + "~ (m + r) xr

1 m

"~2 mx2r

u

1 2m 1

u

:

(15)

Di¤erentiate (14) with respect to x, it yields r m 1

r 1 m 1

Ixx = xuxxx + 1 + "~mx u +~ "m (m

1) xr um

2

uxx + 2~ "rmx

2

(ux ) + "~r (r

1) xr

u

ux

(16)

> ;

2 m

u :

According to (13), (14), and (16), we have xq It

9 > =

(x Ix )x

= x (xq uxt 2~ "rmx "~ mx

xq ut + "~mxr um

x uxxx )

+r 1 m 1

u

+r 1 m 1

u

ux

"~m (m

ux

"~ rx

1) x

1 q

x ut

+r m 2

u

"~mxr+ um

x uxx 2

(ux )

"~r (r

1) x

1

uxx

+r 2 m

u

x uxx

+r 2 m

u :

Di¤erentiate (1) with respect to x, we obtain xq uxt

x uxxx =

qxq

1

ut + 2 x

1

uxx + (

1) x

2

ux + pup

1

ux :

Using this expression and condition (b), we have xq It

(x Ix )x (q + 1) xq ut + "~mxr um

"~mxr+ um

1

uxx

1 q

x ut + (

"~ (2r + ) mx

1) x uxx + (

+r 1 m 1

u

ux

"~r (r

1

ux + pup

1 + )x

+r 2 m

1) x

u :

1

xux

536

CHAN: ...SEMILINEAR PARABOLIC EQUATIONS

From (1), xq ut = x uxx + x xq It

1

9

ux + up . Then,

(x Ix )x (q + 1) x uxx + x

1

1) x uxx + (

1) x

+(

+r 1 m 1

"~ (2r + ) mx

u

ux + up + "~mxr um 1

ux

ux + pup

"~r (r

1

1

x uxx + x "~mxr+ um

xux

1

1

ux + up

uxx

+r 2 m

1 + )x

u :

We simplify the above expression, and according to (12), it yields xq It

(x Ix )x (q + 2

+ pup

1

) x uxx

(q + 2

"~xr um )

(I + u

1

)x

(q + 1) up + "~mxr um+p

ux

+r 1 m 1

2~ "mrx

u

ux

"~r (r

1 + )x

1

+r 2 m

u :

By (15) and (12), the above inequality changes to xq It

(x Ix )x (q + 2 (q + 2

2

)x

"~pxr um+p By assumption p

1

)x

1

x It

+r 2 m 1

u

(x Ix )x + (q + 2 "~m (2r +

"~x

u

1,

u [(r

"~ (m + r) xr

I

1 m

u + "~2 mx2r

(q + 1) up + "~mxr um+p

(I + u

u

u2m

1

"~xr um )

"~r (r

1

1 2m 1

u

+ pup

1 + )x

1

I + pup

+r 2 m

u :

q

+2r 2 2m 1

u

+r 2 2m 1

Choose "~ such that "~ xq It

u

[(r

Ix

+r 2 m 1

u

2 + ) (r + (2r

q

I

(q + 2

+ m) + r (m

2+ )

"~ (p

2

)x

I + pup

1)]

m) xr um+p

1

9 > > > > > > > > 1 I = > > > > > > > > ;

:

um for m > 1. By condition (a), it yields

u [(r

"~x

q

1

)x

2) x

+2r 2 2m 1

+r 2 m

+ "~2 mx

q

+r 2 m

+~ "2 mx

"~x

u

"~xr um )

(I + u

2~ "mrx

1 m 1

q + 1, it gives q

When 0

"~mxr

Ix

2 + ) (r + (2r

[(r

q

q q

2 + ) (r +

2 + ) (r +

q

1)]

2+ )

(x Ix )x + (q + 2 "~m (2r +

+ m) + r (m

(17)

2) x

+ m) + r (m

+ m) + r (m )x

1

I

"~ar m (2r

1)] = [ar m (2r

Ix

+r 2 m 1

u

1)

(q + 2

)x

q

q

2 + )] :

2 + )], then (17) becomes 9 > = (18) > 2 I + pup 1 I: ;

CHAN: ...SEMILINEAR PARABOLIC EQUATIONS

10 When u > 1, u2m

1

< um+p "~2 mx

Choose "~ such that "~

1

for m < p. By condition (a), it gives

+2r 2 2m 1

u

"~xr um+p

1

m) = a

+r 2

(p "~2 mx

537

(2r +r 2

"~a

+2r 2 2m 1

u

q

m (2r

m (2r (2r

2+ )

q

q

m) xr um+p

"~ (p 2+ )

(p

1

m) :

2 + ) , we have

q

2+ )

m) xr um+p

"~ (p

1

0:

Therefore, (17) reduces to (18). Hence, if "~ satis…es condition (c), then (18) is true for u Let J = I

t

e

where

and

0.

are positive real numbers. Then, J (x; t) < 0 on @ , Jt = It

and Jx = Ix . From (18), we have xq Jt +

t

e

(x Jx )x + (q + 2

(q + 2 "~m (2r For any

q

2 (0; T ), let M = max(x;t)2D xq Jt

As x ! 0, x

2

t

q xq

2

[0; ]

+ pup

+r 2 m 1

u

1

Jx t

J+ e t

J+ e

:

u. By condition (a), we obtain 1

)x

Jx + (q + 2

+r 2 m 1

2 + )x

t

J+ e

2 + )x

(x Jx )x + (q + 2

+ "~m (2r e

)x

1

)x

u

(q + 2

2

)x

)x

2

pup

J

1

J

J + pM p

1

:

+ pM p

1

! 1. Let s2 denote the positive root of (q + 2

If s2 < a, we choose

)x

2

= 0:

such that >

pM p sq2

)x

1

1

:

Therefore, for (x; t) 2 0 > xq J t + "~m (2r Suppose that J

(x Jx )x + (q + 2 q

2 + )x

0 somewhere in

+r 2 m 1

u

Jx + (q + 2

)x

J:

, then the set

ft 2 (0; T ) : J (x2 ; t)

0 for some x2 2 Dg

2

J

pup

1

J

e t,

538

CHAN: ...SEMILINEAR PARABOLIC EQUATIONS

11

is nonempty. Let t denote its in…mum. Since J (x; 0) < 0 on D, 0 < t < T . Let x3 denote the smallest x 2 D such that J x3 ; t = 0. We have Jt x3 ; t Jx x3 ; t = 0 and Jxx x3 ; t

0. At t, J attains its maximum at x3 , it follows that

0. Therefore, at x3 ; t

0 > xq3 Jt x3 ; t + (q + 2 + "~m (2r

x3 Jxx x3 ; t + (q + 2 2

) x3

J x3 ; t

2 + ) x3 +r

q

2 ) x3 p 1

p u x3 ; t 2

m 1

u x3 ; t

1

Jx x3 ; t

J x3 ; t

J x3 ; t

0: It leads to a contradiction. Therefore, J < 0 on Theorem 7. If p

. As

! 0+ , I

0 on

.

q + 1 and u0 (x) satis…es (3), then x = 0 is the only blow-up point.

Proof. According to Lemma 6, xux x

u

"~xr um on

1

x

ux

2

. It implies "~xr

u

2 m

u :

It is equivalent to d x dx

1

"~xr+m

u

2

x

1

u

m

:

Let x4 be a positive real number in (0; a]. For x 2 (0; x4 ), we integrate the above expression from x to x4 , it gives x4 1 u (x4 ; t)

m+1

m

x 1

1

u (x; t)

m+1

"~

1 xr+m xr+m 4 r+m 1

1

:

Suppose that u blows up at x4 , then u (x4 ; t) ! 1 as t ! T . The left hand side of the above expression tends to a non-positive real number. However, the right hand side is a positive real number. It leads to a contradiction. Hence, x = 0 is the only blow-up point.

References [1] V. Alexiades, Generalized axially symmetric heat potentials and singular parabolic initial boundary value problems, Arch. Rational Mech. Anal., 79, 325-350 (1982). [2] C. Y. Chan and C. S. Chen, A numerical method for semilinear singular parabolic quenching problems, Quart. Appl. Math., 47, 45-57 (1989). [3] C. Y. Chan and P. C. Kong, Channel ‡ow of a viscous ‡uid in the boundary layer, Quart. Appl. Math., 55, 51-56 (1997).


[4] C. Y. Chan and H. T. Liu, Global existence of solutions for degenerate semilinear parabolic problems, Nonlinear Anal., 34, 617-628 (1998). [5] Y. Chen, Q. Liu and C. Xie, Blow-up for degenerate parabolic equations with nonlocal source, Proc. Amer. Math. Soc., 132, 135-145 (2004). [6] W. A. Day, Parabolic equations and thermodynamics, Quart. Appl. Math., 50, 523-533 (1992). [7] M. S. Floater, Blow-up at the boundary for degenerate semilinear parabolic equations, Arch. Rational Mech. Anal., 114, 57-77 (1991). [8] S. Kaplan, On the growth of solutions of quasi-linear parabolic equations, Commun. Pure Appl. Math., 16, 305-330 (1963). [9] G. S. Ladde, V. Lakshmikantham and A. S. Vatsala, Monotone Iterative Techniques for Nonlinear Di¤ erential Equations, Pitman, Boston, Massachusetts, 1985, p. 143. [10] O. A. Ladyµzenskaja, V. A. Solonnikov and N. N. Ural0 ceva, Linear and Quasilinear Equations of Parabolic Type, Amer. Math. Soc., Providence, Rhode Island, 1968, pp. 80, 341-342, and 351-352. [11] H. Ockendon, Channel ‡ow with temperature-dependent viscosity and internal viscous dissipation, J. Fluid Mech., 93, 737-746 (1979).

JOURNAL 540 OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.3, 540-554, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Generalized Shannon Sampling Method reduces the Gibbs Overshoot in the Approximation of a Step Function Yufang Hao† , Achim Kempf† ‡ †

Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada ‡

Department of Physics, University of Queensland, St Lucia, Queensland, 4072, Australia [email protected]

Abstract As is well-known, the Gibbs’ overshoot in approximating a step function is irreducible when using conventional Shannon sampling. Here, we consider a generalization of Shannon sampling which allows samples to be taken on non-equidistant points, adapted to the behavior of the function. We show, numerically, that the new method allows one to reduce the Gibbs’ overshoot. In a concrete example, the amplitude of the overshoot is reduced by 70%. We study the underlying mathematical structure with a view to eventually determining the ultimate bound on how far the Gibbs’ overshoot can be reduced. Key Words: Sampling, Shannon, Gibbs, Self-Adjoint Operators, Step Function.

1   Introduction

The Shannon sampling theorem was introduced into information theory by Shannon in 1949 [1]. It has since played a crucial role as the link between continuous and


discrete representations of information, finding ubiquitous use in communication engineering and signal processing. Already before Shannon, the theorem was studied by E. Whittaker and J. Whittaker in 1929, and it was also independently studied in the Russian literature by Kotel’nikov in 1933. Hence, it is also called the theorem of Whittaker-Shannon-Kotel’nikov (WSK). For a review, see [2, 3, 4]. The Shannon sampling theorem states that if a function φ(t) is Ω-bandlimited, i.e., φ(t) has a frequency upper bound Ω, then φ(t) can be perfectly reconstructed at all time t from its sample values {φ(tn)}n taken on a set of sampling points {tn}n with an equidistant spacing tn+1 − tn = 1/(2Ω) via:

    φ(t) = \sum_{n=-\infty}^{\infty} G(t, t_n) φ(t_n)                       (1)

The function G(t, tn) is the so-called reconstruction kernel, and in the case of Shannon, it is the shifted sinc function sinc(2Ω(t − tn)) ‘centered’ at tn. The frequency upper bound Ω is called the bandwidth, and the sampling rate 1/(2Ω) is referred to as the Nyquist rate. In addition to its use for the perfect reconstruction of functions in the space of Ω-bandlimited functions, the Shannon sampling theorem has also been widely used to approximate non-bandlimited functions. However, in this case, the Gibbs’ phenomenon occurs whenever there is a discontinuous jump point leading, for example, to the Gibbs ringing problems in image compression. The clearest example to illustrate this type of overshoot is the step function H(t). See Figure 1.

    H(t) = 1 for t > 0,   H(t) = 0 for t = 0,   H(t) = −1 for t < 0.

In Figure 1, the step function H(t) is approximated using Shannon sampling, i.e., as a sum of shifted sinc functions. The equidistant sampling points are at integer multiples of the constant spacing ∆s = 1/(2Ω). Although the sampling density on the right (∆s = 0.1) is ten times tighter than the one on the left (∆s = 1.0), the maximum values of both approximating functions are about 1.0664 with an error of 0.001. We used 1000 terms in (1) in both cases. As is well known, the 6.64% difference to the original step function H(t), i.e. the Gibbs’ overshoot, cannot be further reduced when using Shannon sampling, even when increasing the bandwidth.
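The following short script reproduces this experiment numerically: it sums a truncated version of (1) with samples of H(t) on the integer lattice and reports the largest value of the approximation near the jump. The grid and the truncation are our choices; the text reports a maximum of about 1.0664.

```python
import numpy as np

def H(t):
    return np.where(t > 0, 1.0, np.where(t < 0, -1.0, 0.0))

def shannon_step(t, spacing=1.0, n_terms=1000):
    # Truncated series (1) with the Shannon kernel sinc(2*Omega*(t - t_n)),
    # i.e. sinc((t - t_n)/spacing); np.sinc(x) = sin(pi x)/(pi x).
    n = np.arange(-n_terms, n_terms + 1)
    tn = n * spacing
    return np.sum(H(tn) * np.sinc((t[:, None] - tn) / spacing), axis=1)

t = np.linspace(0.0, 5.0, 2001)
print("maximum of the approximation:", shannon_step(t).max())  # ~ 1.0664 per the text
```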


Figure 1: Approximations of the step function by Shannon sampling. The left panel uses a wider sampling spacing of 1.0, while the right panel uses 0.1.

In this paper, we use a generalized sampling theory [5, 6, 7] which allows the sampling and reconstruction of a function on a set of non-equidistant points adapted to the behavior of the function. We show that the new sampling method displays advantages when approximating a step function. See Figure 2. In Figure 2, we approximated the step function using the generalized sampling method. Outside the interval [−10, 10], the sampling points have the same constant spacing ∆s = 1.0 as the ones on the left in Figure 1. But in the neighborhood interval [−10, 10] of the jump point, we adjusted the sampling density with 20 extra sampling points. As a result, the maximum value is reduced to 1.0193, which is a 70.9% reduction in the amplitude of the Gibbs’ type of overshoot. The amplitude is subject to a numerical error of 0.001, which implies an error of 0.1% in the reduction percentage. The plot on the right in Figure 2 is a zoom-in of the plot on the left near the jump point. The solid line on the top is the amplitude of the Gibbs’ overshoot on the uniform lattice in the case of Shannon (which is 1.0664), and the dashed line indicates the amplitude of the maximum value with the new generalized sampling theorem (which is 1.0193). This indicates that the new method could be very useful, for example, to reduce Gibbs ringing in image compression. While we have observed the Gibbs overshoot reduction numerically, the analytic reasons and the ultimate limit for the Gibbs overshoot reduction still need to be understood. In the rest of this paper, we will therefore first recapitulate main features of the


Figure 2: Approximating the step function by the generalized sampling method with nonequidistant sampling points. The right plot zooms in near to the jump point.

Shannon sampling theorem in Section 2, followed by the generalization in Section 3 where we will show that it preserves the key features of Shannon sampling, except for the equidistance restriction on sampling points. In the end, we will show how to choose the non-equidistant sampling lattice adapted to the behaviour of a step function, and to obtain the result in Figure 2 by the generalized sampling method.
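As a preview of the construction used for Figure 2, a sampling lattice of this kind can be assembled as follows. The placement of the 20 extra points inside [−10, 10] shown here (uniform clustering around the jump) is only an illustrative choice of ours, since the adapted placement is developed later in the paper.

```python
import numpy as np

# Equidistant Shannon lattice with spacing 1.0 outside [-10, 10] ...
outer = np.arange(-50, 51, 1.0)
# ... plus 20 extra sampling points inside [-10, 10]; here they are simply
# clustered uniformly around the jump at t = 0 (an arbitrary illustration).
extra = np.linspace(-5.0, 5.0, 20)
lattice = np.unique(np.concatenate([outer, extra]))
print(len(lattice), "sampling points,", np.sum(np.abs(lattice) <= 10), "of them in [-10, 10]")
```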

2   The Shannon Sampling Theorem

In this paper, we call a set of points where we take samples a sampling lattice. The Shannon sampling theorem does not specify where we should start to take samples, but requires that the distance between two adjacent points in one sampling lattice is precisely the constant Nyquist spacing 1/(2Ω). Hence, we can parametrize all the sampling lattices as:

    t_n(θ) = (n + θ)/(2Ω),

0≤θ 0, 0 < α − β ≤ 1,

(2.8)

1 (b − a) 2

b

∫ a

b

f α (t ) dt ∫ f

−β

(t ) dt ≥

a

α − β α −β ⎛ a + b ⎞ f ⎜ ⎟ α −β ⎝ 2 ⎠

1 2

+

f α − β (a) + f α − β (b) . α −β 4

α

In particular 1 (b − a) 2

(2.9)

b

∫ a

b

f β (t ) dt ∫ f a

Proof.

1 (b − a) 2

b

∫ a

b

f α (t ) dt ∫ f a

−β

(t ) dt ≥

−β

⎛ (t ) dt ≥ 1 + β ln⎜ ⎜ ⎝

1 (b − a) 2

b b

⎛ α

∫ ∫ ⎜⎜⎝ α − β a a

⎞ ⎟. f ((a + b) / 2 ) ⎟⎠ f (a ) f (b)

f α − β ( x) −

β

⎞ f α − β (t ) ⎟⎟ dx dt α −β ⎠



b

1 f α − β (t ) dt ∫ (b − a) a

= =

α

b



α −β

f α − β (t ) dt −

a

β

b

α −β

∫f

α −β

(t ) dt

a

⎛ 1 α − β ⎛ a + b ⎞ f α − β (a) + f α − β (b) ⎞ ⎜ f ⎟⎟ ≥ ⎜ ⎟+ α − β ⎜⎝ 2 4 ⎝ 2 ⎠ ⎠ β ⎛a+b⎞ − f α −β ⎜ ⎟ α −β ⎝ 2 ⎠ 1 α − β α −β ⎛ a + b ⎞ α f α − β (a) + f α − β (b) + =2 f . ⎟ ⎜ α −β 4 ⎝ 2 ⎠ α −β

α

Inequality (2.8) can be written as

1 (b − a) 2

b

∫ a

⎛a+b⎞ f α −β ⎜ ⎟ −1 2 ⎝ ⎠ α −β f (t ) dt ∫ f (t ) dt ≥ 1 + ( 12 α − β ) α −β a b

+ By letting α → β , the above implies 1 (b − a) 2

b

∫ a

b

f β (t ) dt ∫ f

−β

α ⎛ f (a) f α − β (b) − 1 ⎞ ⎜ ⎟. + 4 ⎜⎝ α − β α − β ⎟⎠

f (a) f (b)

(t ) dt ≥ 1 + β ln

f ((a + b) / 2)

a

.

Other kinds are also obtained Theorem 2.6. If f : [a, b] → ℜ + is continuous and concave, then for all real numbers α ≥ 0, β > 0, α + β = 1, (2.10)

1 (b − a ) 2

b

∫f a

α

b

(t ) dt ∫ f

−β

a

⎛a+b⎞ (t ) dt ≤ α f ⎜ ⎟ ⎝ 2 ⎠ ⎛1 ⎛a+b⎞ f + β ⎜⎜ f −1 ⎜ ⎟+ ⎝ 2 ⎠ ⎝2

Proof. By virtue of Remark 2, b b 1 1 α f (t ) dt ∫ f − β (t ) dt = 2 ∫ (b − a) 2 (b − a) a a

b b

∫∫ f

α

( x) f

−β

−1

(a) + f 4

(t ) dx dt

1 (b − a) 2

⎛ α β α +β ∫a ∫a ⎜⎜⎝ α + β f ( x) + α + β f

=

1 (b − a ) 2

∫ ∫ (α f ( x) + β f

b b

a a

(b) ⎞ ⎟⎟ . ⎠

a a



b b

−1

−1

)

(t ) dx dt

− (α + β )

⎞ (t ) ⎟⎟ dx dt ⎠



(

b

=

1 α f ( x) dx + β f −1 (t ) dt b − a ∫a

)

⎛ 1 −1 ⎛ a + b ⎞ f ⎛a+b⎞ ≤α f ⎜ ⎟+ ⎟ + β ⎜⎜ f ⎜ ⎝ 2 ⎠ ⎝ 2 ⎠ ⎝2

−1

(a) + f 4

−1

(b) ⎞ ⎟⎟ . ⎠

Theorem 2.7. If f : [a, b] → ℜ + is continuous and concave, then for all real numbers α , β , s, t , p, q, p ≤ 1 / α , q ≥ 1 / β , s < 1 / α , t > −1 / β , 1p + 1q = 1, 1s + 1t = 1, p > 1, 0 < s < 1,

(2.11)

1 ⎛ αs ⎛ a + b ⎞ f αs (a) + f αs (b) ⎞ 1 ⎛ ⎜f ⎜ ⎟⎟ + ⎜⎜ f ⎟+ 2s ⎜⎝ 2 ⎝ 2 ⎠ ⎠ 2t ⎝ 1 ≤ (b − a ) 2

Proof. We have b b 1 α f ( x) dx ∫ f (b − a ) 2 ∫a a

b

∫f

α

a

b

( x) dx ∫ f



( x) dx ≤

1 ⎛ ⎜f + 2q ⎜⎝ −β

⎛a+b⎞ f ⎜ ⎟+ ⎝ 2 ⎠

a

1 (u ) du ≤ (b − a) 2

− βq

b b

1 αp ⎛ a + b ⎞ 1 ⎛ f ⎜ ⎟ + ⎜⎜ f p ⎝ 2 ⎠ 2q ⎝

− βq

⎛1

αp

( x) +

a a

− βq

− βt

(a) + f 2

− βt

(b) ⎞ ⎟⎟ ⎠

1 αp ⎛ a + b ⎞ f ⎜ ⎟ p ⎝ 2 ⎠

⎛a+b⎞ f ⎜ ⎟+ ⎝ 2 ⎠

∫ ∫ ⎜⎜⎝ p f

1 ⎛ 1 αp 1 ⎜⎜ f ( x) dx + f ∫ b−a a⎝ p q b

=

−β

− βt

− βq

1 f q

(a) + f 2 − βq

− βq

(b) ⎞ ⎟⎟ . ⎠

⎞ (u ) ⎟⎟ dx du ⎠

⎞ (u ) du ⎟⎟ ⎠

⎛a+b⎞ f ⎜ ⎟+ ⎝ 2 ⎠

− βq

(a) + f 2

− βq

(b) ⎞ ⎟⎟ . ⎠

Also

1 (b − a) 2

b

∫f a

α

b

( x) dx ∫ f a

−β

1 − βt ⎞ ⎛ 1 αs ⎜ f ( x) + f (u ) ⎟ dx du t ⎝s ⎠ 1 ⎛ 1 αs 1 − βt ⎞ = ⎜ f ( x) dx + f (u ) du ⎟ b−a⎝s t ⎠

(u ) du ≥



1 (b − a) 2

1 ⎛ αs ⎛ a + b ⎞ f αs (a) + f αs (b) ⎞ ⎜f ⎜ ⎟⎟ ⎟+ 2 s ⎜⎝ 2 ⎝ 2 ⎠ ⎠

+

1⎛ ⎜f 2t ⎜⎝

− βt

⎛a+b⎞ f ⎜ ⎟+ ⎝ 2 ⎠

− βt

(a) + f 2

− βt

(b) ⎞ ⎟⎟ . ⎠

Theorem 2.8. If f : [a, b] → ℜ + is continuous and concave, then for all real numbers α ≥ 0, β > 0, α + β ≥ 1,


(2.12)

1 (b − a) 2

b



f

b

( x) dx ∫ f

−α

a

−β


(t ) dt

a

1 α ⎛ a + b ⎞ α + 2β f f −α − β ⎜ ⎟+ 2α +β ⎝ 2 ⎠ α+β Proof. Let g = f −1 . Then g is convex, and hence ≤

−α − β

(a) + f 4

−α − β

(b)

b b ⎞ β 1 ⎛ α 1 α ⎜⎜ g ( x) dx ∫ g β (t ) dt ≤ g α + β ( x) dx + g α + β (t ) dt ⎟⎟ 2 ∫ α +β (b − a) ⎝ α + β (b − a ) a ⎠ a

= ≤

α + 2β − β 1 b α + β g (t ) dt α + β b − a ∫a

α + 2β ⎛ 1 α + β ⎛ a + b ⎞ g α + β (a) + g α + β (b) ⎞ ⎜ g ⎜ ⎟⎟ ⎟+ α + β ⎜⎝ 2 4 ⎝ 2 ⎠ ⎠ β

⎛a+b⎞ g α +β ⎜ ⎟ α +β ⎝ 2 ⎠ α +β α +β 1 α ⎛ a + b ⎞ α + 2β g (a ) + g (b) = g α +β ⎜ ⎟+ 2α +β 4 ⎝ 2 ⎠ α +β −

Therefore b 1 f (b − a ) 2 ∫a

−α

b

( x) dx ∫ f

−β

(t ) dt

a

1 α ≤ f 2α +β

−α − β

⎛ a + b ⎞ α + 2β f ⎜ ⎟+ ⎝ 2 ⎠ α+β

−α − β

(a) + f 4

−α − β

(b )

References [1]. C. E. M. Pearce and J. E. Pecaric, On some inequalities of Brenner and Alzer for concave functions, J. Math. Anal. Appl. 198 (1996), 282-288.

.

JOURNAL 602 OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.4, 602-615, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

RECURRENCE RELATION WITH BINOMIAL COEFFICIENT GEORGE GROSSMAN DEPARTMENT OF MATHEMATICS CENTRAL MICHIGAN UNIVERSITY MT. PLEASANT, MI 48859 [email protected] PHONE: 989-774-5577 FAX: 989-774-2414 AKLILU ZELEKE DEPARTMENT OF MATHEMATICS LYMAN BRIGGS COLLEGE EAST LANSING, MI 48825 [email protected] XINYUN ZHU DEPARTMENT OF MATHEMATICS UNIVERSITY OF TEXAS OF THE PERMIAN BASIN ODESSA, TX 79762 ZHU [email protected] RUNNING HEAD: RECURRENCE RELATION WITH BINOMIAL COEFFICIENT


Abstract We derive a linear, nonhomogeneous, recurrence relation having two indices of recurrence, with initial conditions and equated to a binomial coefficient. We relate a known classical combinatorial identity to partial sums of geometric series and show that a certain corresponding sum always is a rational, non-integer term. We construct solutions which are rational expressions with an indeterminate form evaluated in a limit as a binomial coefficient. We establish several interesting combinatorial identities and use these to express sums of powers of integers (in particular, squares and cubes) as a finite sum of binomial coefficients with integer coefficients. Keywords: recurrence relation, binomial coefficient, characteristic polynomial.

1. Introduction Different counting procedures and various arrangements of mathematical objects lend themselves to combinatorial identities. Moreover, combinatorial identities [11] arise in numerous settings in mathematics and commonly involve polynomials, binomial coefficients and recurrence relations and the current paper combines all of these features. For example, if 𝑆𝑝 (𝑛) = 1+2𝑝 +⋅ ⋅ ⋅+𝑛𝑝 , integers 𝑝 > 0, 𝑛 ≥ 0, then it can be shown [9],(sect. 4 contains result) that 𝑆𝑝 (𝑛) satisfies a recurrence relation involving binomial coefficients, term 1/(𝑝 + 1) and 𝑆𝑖 (𝑛), 0 ≤ 𝑖 < 𝑛, 𝑆0 (𝑛) = 𝑛. Additionally, computer-generated proofs [12] also generate identities as a by-product of the proof-process. In a sense, the present paper offers an algorithmic way of producing at the least a sequence of combinatorial identities whose importance or significance lies in originality. In one case an elegant result [10] (see sect. 2) was generalized. The basic idea lies in the characteristic polynomial of the 𝑗-th order Fibonacci sequence given by 𝐹𝑗 (𝑥) = 𝑥𝑗 − 𝑥𝑗−1 − ... − 𝑥 − 1. It is known that the positive zeros of 𝐹𝑗 (𝑥) are of the form 2 − 𝑂(2−𝑗 ) [1],[2]. It has also been shown [8] that the single negative zero of 𝐹𝑗 (𝑥) has the form −1 + 𝑂(𝑗 −1 ) for 𝑗 even and tends to −1 monotonically as 𝑗 → ∞. By factoring these zeros one gets, 𝐹𝑗 (𝑥) = (𝑥 − 2 + 𝜀𝑗 )(𝑥 + 1 − 𝛿𝑗 )(𝑥𝑗−2 + 𝑎𝑗−3 𝑥𝑗−3 + ⋅ ⋅ ⋅ + 𝑎1 𝑥 + 𝑎0 ), where 𝛿𝑗 and 𝜀𝑗 are positive, decreasing sequences for 𝑗 = 4, 6, . . .. In [7] an explicit form for the coefficients 𝑎𝑖 was found by solving a non-homogeneous linear recurrence relation of the form −𝑎𝑛 + 𝑏 𝑎𝑛+1 + 𝑐 𝑎𝑛+2 = 1 where 𝑏 = 𝜀 − 1 − 𝛿, 𝑐 = (1 − 𝛿)(2 − 𝜀). As a byproduct of this solution, several combinatorial identities were formulated by solving by a standard method and comparing solutions; computer-generated and combinatorial proofs were also given to some identities, [4], [5], [6]. A recent paper, [13] has studied the asymptotics of the zeros of the positive and negative zeros of the derivatives and indefinite integrals of 𝐹𝑗 as 𝑗 → ∞ and shown this behavior is monotone for each derivative and integral. Moreover, one can write for


sufficiently large 𝑗 and by factoring that (𝑘) 𝐹𝑗

(1.1)

𝑗−(𝑘+2)

= (𝑥 − 2 + 𝜖𝑗 )(𝑥 + 1 − 𝛿𝑗 )



(𝑘)

𝑎 𝑖 𝑥𝑖 .

𝑖=0

(𝑘)

Here 𝐹𝑗 is the 𝑘 𝑡ℎ derivative of 𝐹𝑗 . Analogous to the case 𝑘 = 0 which corresponds to 𝐹𝑗 ; in the present paper we derive recurrence relations of the form ( ) 𝑛+𝑘+2 (1.2) −𝑎𝑘,𝑛 − 𝑏𝑎𝑘,𝑛+1 + 𝑐𝑎𝑘,𝑛+2 = , 𝑘, 𝑛 ≥ 0, 𝑛+2 with initial conditions. The proof of this result is by induction over 𝑘 and is given in section 2. In sect. 3 we discuss the special case 𝑘 = 1 and 𝑐 = 𝑏 + 1 which we call a singular solution and leads to an indeterminate form for 𝑏 = −2. A result in [10] is shown to follow easily from an interesting identity. In sect. 4 we apply these ideas to finding closed forms for sums of powers of integers. More specifically, we show how one can find a closed form that contains sums of binomial coefficients with integer coefficients for the 𝑛 𝑛 ∑ ∑ 2 following sums, 𝑆2 (𝑛) = 𝑖 and 𝑆3 (𝑛) = 𝑖3 . 𝑖=1

𝑖=1

2. Recurrence Relation and Identities

In the present section we extend some results in [3].

Theorem 2.1. Define real numbers a_{k,n}, b, c, where k, n are nonnegative integers, subject to the initial conditions

    a_{0,0} = 1/c,   a_{0,1} = 1/c + b/c^2,                                 (2.1)

and a_{0,j} for j ≥ 2 by

    −a_{0,n} − b a_{0,n+1} + c a_{0,n+2} = 1,   n ≥ 0,                      (2.2)

and

    a_{k+1,n} = \sum_{i=0}^{n} a_{k,i}.                                     (2.3)

Then we have

    −a_{k,n} − b a_{k,n+1} + c a_{k,n+2} = \binom{n+k+2}{n+2},   k, n ≥ 0,  (2.4)

where the RHS of (2.4) comprises binomial coefficients in n and k that relate the levels of the recurrence in k.
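Theorem 2.1 is easy to check numerically before reading the proof. The sketch below builds the a_{k,n} exactly with rational arithmetic from (2.1)-(2.3) and verifies (2.4); the values b = 3, c = 5 are hypothetical choices of ours, not values used in the paper.

```python
from fractions import Fraction
from math import comb

def check_theorem_2_1(b=Fraction(3), c=Fraction(5), K=6, N=8):
    # a_{0,n} from (2.1) and (2.2); a_{k+1,n} from the partial sums (2.3).
    a = {(0, 0): 1 / c, (0, 1): 1 / c + b / c ** 2}
    for n in range(N + 2):
        a[(0, n + 2)] = (1 + a[(0, n)] + b * a[(0, n + 1)]) / c
    for k in range(K):
        for n in range(N + 4):
            a[(k + 1, n)] = sum(a[(k, i)] for i in range(n + 1))
    for k in range(K + 1):
        for n in range(N):
            lhs = -a[(k, n)] - b * a[(k, n + 1)] + c * a[(k, n + 2)]
            assert lhs == comb(n + k + 2, n + 2)
    print("identity (2.4) verified for 0 <= k <=", K, "and 0 <= n <", N)

check_theorem_2_1()
```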


Proof. We prove inductively on 𝑘. Substituting (2.3) into LHS (2.4) with 𝑘 = 1 and employing initial conditions yields, − 𝑎1,𝑛 − 𝑏𝑎1,𝑛+1 + 𝑐𝑎1,𝑛+2 = −

𝑛 ∑

𝑎0,2𝑖 − 𝑏

𝑖=0

𝑛+1 ∑

𝑎0,𝑖 + 𝑐

𝑖=0

𝑛+2 ∑

𝑎0,𝑖

𝑖=0

= − 𝑎0,0 − 𝑏𝑎0,1 + 𝑐𝑎0,2 − 𝑎0,1 − 𝑏𝑎0,2 + 𝑐𝑎0,3 − . . . − 𝑎0,𝑛 − 𝑏𝑎0,𝑛+1 + 𝑐𝑎0,𝑛+2 − 𝑏𝑎0,0 + 𝑐𝑎0,0 + 𝑐𝑎0,1 ( ) ( ) 1 1 1 𝑏 𝑛+3 = 𝑛+1−𝑏⋅ +𝑐⋅ +𝑐⋅ + =𝑛+3= . 𝑐 𝑐 𝑐 𝑐2 𝑛+2 Thus, (2.4) is shown for 𝑘 = 1. Next we note that, Lemma 2.1.

    a_{k,0} = 1/c,                                                          (2.5)
    a_{k,1} = (k+1)/c + b/c^2,   k ≥ 0.                                     (2.6)

Proof. Both results follow from the initial conditions and by repeatedly applying (2.3), the former with n = 0 and the latter with n = 1. □

𝑖=0

𝑖=0

= − 𝑎𝑗,0 − 𝑏𝑎𝑗,1 + 𝑐𝑎𝑗,2 − 𝑎𝑗,1 − 𝑏𝑎𝑗,2 + 𝑐𝑎𝑗,3 − . . . − 𝑎𝑗,𝑛 − 𝑏𝑎𝑗,𝑛+1 + 𝑐𝑎𝑗,𝑛+2 − 𝑏𝑎𝑗,0 + 𝑐𝑎𝑗,0 + 𝑐𝑎𝑗,1 ) ( ) ( ) ( ) 𝑗+2 𝑗+3 𝑛+𝑗+3 𝑏 𝑏 𝑐 𝑗+1 = + + ... + − + +𝑐⋅ 2 + 2 3 𝑛+3 𝑐 𝑐 𝑐 𝑐 ( ) ( ) ( ) ( ) 𝑗+1 𝑗+2 𝑗+3 𝑛+𝑗+2 = 1+ + + + ... + 1 2 3 𝑛+2 ( ) 𝑛+𝑗+3 = , 𝑛+2 (

by a standard (ice or field hockey stick) result in binomial coefficients.



Remark. The use of (2.3) to derive (2.4) is not obvious, but found by trial and error, nevertheless, it is somewhat close to the concept of superposition of families of solutions in linear differential equations. It is also advantageous to write (2.4) in odd and even cases

606

RECURRENCE RELATION WITH BINOMIAL COEFFICIENT

5

Corollary 2.1. ) 2𝑛 + 𝑘 + 2 −𝑎𝑘,2𝑛 − 𝑏𝑎𝑘,2𝑛+1 + 𝑐𝑎𝑘,2𝑛+2 = , 2𝑛 + 2 ( ) 2𝑛 + 𝑘 + 3 −𝑎𝑘,2𝑛+1 − 𝑏𝑎𝑘,2𝑛+2 + 𝑐𝑎𝑘,2𝑛+3 = . 2𝑛 + 3 (

(2.7) (2.8)

The following solutions [7] to (2.7,2.8) are given by (2.9) (2.10)

) 𝑛 𝑛+1+𝑘 ( 1 ∑ ∑ 𝑛 + 1 + 𝑘 𝑏𝑖 + 𝑛+2 , 𝑐 𝑐𝑘 𝑖 𝑘=0 𝑖=2𝑘+1 ( ) ) 𝑛−1 𝑛+1+𝑘 ( 𝑛+1 ) 1 1 − ( 1+𝑏 1 ∑ ∑ 𝑛 + 1 + 𝑘 𝑏𝑖 𝑐 = + 𝑛+2 . 𝑘 𝑐 𝑐 𝑖 𝑐 1 − 1+𝑏 𝑐 𝑘=0 𝑖=2𝑘+2

1 = 𝑐

𝑎0,2𝑛+1 𝑎0,2𝑛

(

1 − ( 1+𝑏 )𝑛+1 𝑐 1 − 1+𝑏 𝑐

)

Subtracting (2.9,2.10) gives Corollary 2.2. (2.11)

𝑎0,2𝑛+1

) 𝑛 ( 1 ∑ 𝑛 + 1 + 𝑘 𝑏2𝑘+1 = 𝑎0,2𝑛 + 𝑛+2 . 𝑐 2𝑘 + 1 𝑐𝑘 𝑘=0

Moreover, let 𝑏 = 1 − 𝑐 in 2.11 and employing a result in [7] gives Corollary 2.3. (2.12)

) 𝑛 ( 1 ∑ 𝑛 + 1 + 𝑘 (1 − 𝑐)2𝑘+1 1/𝑐2(𝑛+1) − 1 1 1/𝑐2(𝑛+1) − 1 = = . 𝑐𝑛+2 𝑘=0 2𝑘 + 1 𝑐𝑘 1+𝑐 𝑐 1 + 1/𝑐

which LHS can be computed by Maple software and RHS is related to partial sum of geometric series. Setting 𝑐 = −1 in cor. 2.3 and employing L’Hˆopital’s rule produces a result in [10] given by ( ) 𝑛 ∑ 𝑛−𝑘 2𝑘 𝑛 + 𝑘 + 1 (−1) 2 = 𝑛 + 1. 2𝑘 + 1 𝑘=0 The derivation in [10] involved rational expressions and equating coefficients of infinite series. A few elegant results follow from 2.12. By setting 𝑐 = 1/𝑝, 𝑝 > 1 )𝑘 ( ) ( ) 𝑛 ( 𝑝2 − 1 ∑ (𝑝 − 1)2 𝑛+𝑘+1 1 𝑛 , 𝑛 ≥ 1. ≈ 𝑝 + 𝑂 𝑝2 𝑘=0 𝑝 2𝑘 + 1 𝑝𝑛 and 2

(𝑝 − 1)𝑝

𝑛

)𝑘 ( ) 𝑛 ( ∑ (𝑝 − 1)2 𝑛+𝑘+1 𝑘=0

and we see

𝑝

2𝑘 + 1

+ 1 = 𝑝2(𝑛+1) ,

607

6

GROSSMAN, ZELEKE AND ZHU

Corollary 2.4. If 𝑝 is positive integer > 1 and 𝑛 ≥ 1 then )𝑘 ( ) 𝑛 ( ∑ (𝑝 − 1)2 𝑛+𝑘+1 𝑝

𝑘=0

2𝑘 + 1

is a positive, non-integer rational number. Proof. We have )𝑘 ( ) 𝑛 ( ∑ (𝑝 − 1)2 𝑛+𝑘+1 𝑝

𝑘=0

2𝑘 + 1

=

1 2𝑛 (𝑝 + 𝑝2(𝑛−1) + . . . + 𝑝2 + 1), 𝑛 𝑝

and the result follows by considering the remainder 𝑚𝑜𝑑 𝑝.



One can also easily use Maple software to help compute the following: if 𝑝 ∕= 0, 1 Corollary 2.5. )𝑘 ( ) 𝑛 ( ∑ (𝑝 − 1)2 𝑛+𝑘+1 𝑘=0

𝑝

2𝑘

=

1 𝑝𝑛+1

(

) 𝑝2𝑛+3 + 1 2(𝑛+1) − (𝑝 − 1) , 𝑝+1

(

) 1 − 𝑝2𝑛+4 2(𝑛+1) − (𝑝 − 1) . 1 − 𝑝2

and also Corollary 2.6. )𝑘 ( ) 𝑛 ( ∑ (𝑝 − 1)2 𝑛+𝑘+2 𝑘=0

𝑝

2𝑘 + 1

=

1 𝑝𝑛+1

It is noted that the possible elegance results from exponent 2 in the (𝑝 − 1)2 term in the sums. 3. Singular solution: k=1 In the this section we consider (2.4) with 𝑘 = 1 so that, (3.1)

−𝑎1,𝑛 − 𝑏𝑎1,𝑛+1 + 𝑐𝑎1,𝑛+2 = 𝑛 + 3, 𝑛 ≥ 0,

It is well-known that the standard solution for 𝑛, both odd and even, has the following (real) form, 𝑎1,𝑛 = 𝐴 + 𝐵𝑛 + 𝐶𝑛2 + 𝑐1 𝛼𝑛 + 𝑐2 𝛽 𝑛 ,

(3.2)

for some undetermined coefficients 𝐴, 𝐵, 𝐶, roots 𝛼, 𝛽 of the characteristic equation, and some constants 𝑐1 , 𝑐2 , that depend on the initial conditions. It is noted in (2.9, 2.10), the case 𝑐 = 𝑏 + 1 leads to an indeterminate form. The case, 𝑐 ∕= 𝑏 + 1, is considered in [7]. We have (3.3)

−𝑎1,𝑛 − 𝑏𝑎1,𝑛+1 + (𝑏 + 1) ⋅ 𝑎1,𝑛+2 = 𝑛 + 3, 𝑛 ≥ 0,

with initial conditions (2.1). The basic elementary idea is to exploit (2.4) with the solution (2.9, 2.10), and compare to the newly found solutions to (3.3) for odd and even cases. We solve (3.3) for solution 𝑎 := 𝑎1,𝑛 according to 𝑎 = 𝑎𝑐 + 𝑎𝑝 such that, −𝑎𝑐 − 𝑏𝑎𝑐 + (𝑏 + 1) ⋅ 𝑎𝑐 = 0, −𝑎𝑝 − 𝑏𝑎𝑝 + (𝑏 + 1)𝑎𝑝 = 𝑛 + 3. We have the following,

608

RECURRENCE RELATION WITH BINOMIAL COEFFICIENT

Lemma 3.1. (3.4)

𝑎 := 𝑎1,𝑛 = 𝑐1 + 𝑐2

7

(−1)𝑛 + 𝐵𝑛 + 𝐶𝑛2 , (1 + 𝑏)𝑛

where, 3𝑏 + 8 , 2(𝑏 + 2)2 𝑏2 + 5𝑏 + 7 𝑐1 = , (𝑏 + 2)3

(3.5)

𝐵=

(3.6)

1 , 2(𝑏 + 2) 1 𝑐2 = . (𝑏 + 1) ⋅ (𝑏 + 2)3 𝐶=

Proof. For 𝑎𝑐 : The characteristic equation is given by: 1 . 1+𝑏 We next compute 𝐵, 𝐶 by substituting 𝑎𝑝 = 𝐵𝑛 + 𝐶𝑛2 into (3.3) which yields the following equation, (𝑏 + 1) ⋅ 𝛼2 − 𝑏 ⋅ 𝛼 − 1 = 0 ⇐⇒ 𝛼 = 1, −

− 𝐶 ⋅ 𝑛2 − 𝐵 ⋅ 𝑛 − 𝑏 ⋅ (𝐶 ⋅ (𝑛 + 1)2 + 𝐵 ⋅ (𝑛 + 1)) − 𝑏 ⋅ (𝐶 ⋅ (𝑛 + 2)2 + 𝐵 ⋅ (𝑛 + 2)) = 𝑛 + 3, which, after simplification gives (3.7)

𝑛 ⋅ 𝐶 ⋅ (2𝑏 + 4) + 𝐶 ⋅ (3𝑏 + 4) + 𝐵 ⋅ (𝑏 + 2) = 𝑛 + 3.

We obtain from (3.7), the pair of equations: (3.8)

𝐶 ⋅ (2𝑏 + 4) = 1, 𝐶 ⋅ (3𝑏 + 4) + 𝐵 ⋅ (𝑏 + 2) = 3.

Solving (3.8) for 𝐵, 𝐶 yields (3.5). To solve for 𝑐1 , 𝑐2 in (3.4) use (3.5) with 𝑛 = 0, 1 and (2.6) with 𝑘 = 1, 𝑐 = 𝑏 + 1, to get the pair of equations 1 , 𝑐1 + 𝑐2 = 𝑏+1 𝑐2 1 3𝑏 + 8 3𝑏 + 2 𝑐1 − + + = , 2 𝑏 + 1 2(𝑏 + 2) 2(𝑏 + 2) (𝑏 + 1)2 with solution (3.6).



A result in [7], Corollary 3.1. (3.9)

(3.10)

𝑏+3 1 2𝑛 + 1 − + (𝑏 + 2)2 (1 + 𝑏)2(𝑛+1) (𝑏 + 2)2 𝑏+2 ( 𝑛 𝑛+1+𝑘 ∑ ∑ 𝑛 + 1 + 𝑘 ) 𝑏𝑖 1 = 𝑛+1+ , (1 + 𝑏)𝑛+2 𝑘=0 𝑖=2𝑘+1 𝑖 (1 + 𝑏)𝑘

𝑎0,2𝑛+1 =

𝑏+3 1 2𝑛 + + 2 2(𝑛+1) 2 (𝑏 + 2) (1 + 𝑏) (𝑏 + 2) 𝑏+2 ( 𝑛−1 𝑛+1+𝑘 ∑ ∑ 𝑛 + 1 + 𝑘 ) 𝑏𝑖 1 = 𝑛+1+ . (1 + 𝑏)𝑛+2 𝑘=0 𝑖=2𝑘+2 𝑖 (1 + 𝑏)𝑘

𝑎0,2𝑛 =

609

8

GROSSMAN, ZELEKE AND ZHU

We have analogous result for cor. 3.1. From 2.3 we have (3.11)

𝑎𝑘+1,2𝑛 =

2𝑛 ∑

𝑎𝑘,𝑖 , 𝑎𝑘+1,2𝑛+1 =

𝑖=0

2𝑛+1 ∑

𝑎𝑘,𝑖 .

𝑖=0

We employ (3.11) in RHS of (3.9, 3.10) and use (3.4-3.6) to get after simplification, Corollary 3.2. 1 (2𝑛 + 1)2 (2𝑛 + 1)(3𝑏 + 8) 𝑏2 + 5𝑏 + 7 − + + (𝑏 + 2)3 (1 + 𝑏)2(𝑛+1) (𝑏 + 2)3 2(𝑏 + 2) 2(𝑏 + 2)2 ) 𝑛 𝑖−1 𝑖+1+𝑘 ( (𝑛 + 1)(𝑛 + 2) ∑ ∑ ∑ 𝑖 + 1 + 𝑘 𝑏𝑗 = + 𝑏+1 𝑗 (1 + 𝑏)𝑖+2+𝑘 𝑖=1 𝑘=0 𝑗=2𝑘+2

𝑎1,2𝑛+1 = (3.12)

𝑛 ∑ 𝑖 𝑖+1+𝑘 ∑ ∑ (𝑖 + 1 + 𝑘 ) 𝑏𝑗 + , (1 + 𝑏)𝑖+2+𝑘 𝑗 𝑖=0 𝑘=0 𝑗=2𝑘+1

𝑏2 + 5𝑏 + 7 1 2𝑛2 𝑛(3𝑏 + 8) + + + 3 2𝑛+1 3 (𝑏 + 2) (1 + 𝑏) (𝑏 + 2) 𝑏+2 (𝑏 + 2)2 ) 𝑛 𝑖−1 𝑖+1+𝑘 ( 𝑏𝑗 (𝑛 + 1)2 ∑ ∑ ∑ 𝑖 + 1 + 𝑘 = + 𝑗 𝑏+1 (1 + 𝑏)𝑖+2+𝑘 𝑖=1 𝑘=0 𝑗=2𝑘+2

𝑎1,2𝑛 = (3.13)

𝑛−1 ∑ 𝑖 𝑖+1+𝑘 ∑ (𝑖 + 1 + 𝑘 ) ∑ 𝑏𝑗 + . 𝑗 (1 + 𝑏)𝑖+2+𝑘 𝑖=0 𝑘=0 𝑗=2𝑘+1

One can then apply the similar procedure for 𝑘 = 1, 2, . . . thereby producing the claimed sequence of identities. The combinatorial aspect of these identities is illustrated in the next section. 4. 𝑆2 (𝑛), 𝑆3 (𝑛) and binomial coefficients The interesting case in (3.12, 3.13) is 𝑏 → −2. By continuity of RHS the limit exists and we note that binomial coefficients satisfy the following well-known equation

(4.1) (4.2)

) ( ) ( ) ( ) 2𝑛 + 2 + 𝑘 2𝑛 + 3 + 𝑘 2𝑛 + 4 + 𝑘 2𝑛 + 2 + 𝑘 −2 + = , 2𝑛 2𝑛 + 1 2𝑛 + 2 2𝑛 + 2 ( ) ( ) ( ) ( ) 2𝑛 + 3 + 𝑘 2𝑛 + 4 + 𝑘 2𝑛 + 5 + 𝑘 2𝑛 + 3 + 𝑘 −2 + = . 2𝑛 + 1 2𝑛 + 2 2𝑛 + 3 2𝑛 + 3

(

By applying (4.1, 4.2) to (3.12, 3.13) we get that

610

RECURRENCE RELATION WITH BINOMIAL COEFFICIENT

9

Corollary 4.1. lim𝑏→−2 = lim𝑏→−2 =

𝑏2 + 5𝑏 + 7 1 (2𝑛 + 1)2 (2𝑛 + 1)(3𝑏 + 8) − + + (𝑏 + 2)3 (1 + 𝑏)2(𝑛+1) (𝑏 + 2)3 2(𝑏 + 2) 2(𝑏 + 2)2 ) ( 2𝑛 + 4 , − 2𝑛 + 1 𝑏2 + 5𝑏 + 7 1 2𝑛2 𝑛(3𝑏 + 8) + + + 3 2𝑛+1 3 (𝑏 + 2) (1 + 𝑏) (𝑏 + 2) 𝑏+2 (𝑏 + 2)2 ( ) 2𝑛 + 3 − . 2𝑛

Applying cor. 4.1 to (3.12, 3.13) we obtain Corollary 4.2. ( ) 𝑛 ∑ 𝑖−1 𝑖+1+𝑘 ∑ ∑ (𝑖 + 1 + 𝑘 ) 2𝑛 + 4 (4.3) = (𝑛 + 1)(𝑛 + 2) + 2𝑗 (−1)1+𝑖+𝑗+𝑘 2𝑛 + 1 𝑗 𝑖=1 𝑘=0 𝑗=2𝑘+2 𝑛 ∑ 𝑖 𝑖+1+𝑘 ∑ (𝑖 + 1 + 𝑘 ) ∑ 2𝑗 (−1)1+𝑖+𝑗+𝑘 , + 𝑗 𝑖=0 𝑘=0 𝑗=2𝑘+1

(4.4)

(

) 𝑛 ∑ 𝑖−1 𝑖+1+𝑘 ∑ (𝑖 + 1 + 𝑘 ) ∑ 2𝑛 + 3 2 2𝑗 (−1)1+𝑖+𝑗+𝑘 = (𝑛 + 1) + 𝑗 2𝑛 𝑖=1 𝑘=0 𝑗=2𝑘+2 𝑛−1 ∑ 𝑖 𝑖+1+𝑘 ∑ (𝑖 + 1 + 𝑘 ) ∑ 2𝑗 (−1)1+𝑖+𝑗+𝑘 . + 𝑗 𝑖=0 𝑘=0 𝑗=2𝑘+1

∑ 𝑝 We next show how it is possible to express sums of powers of integers of the form 𝑖, where 𝑝 = 2, 3 as a sum of binomial coefficients. To see how this is done we have from [7] 𝑛−1 𝑛+1+𝑘 ∑ (𝑛 + 1 + 𝑘 ) ∑ 𝑛+1 (4.5) 𝑛(𝑛 + 1) = (−1) 2𝑖−1 (−1)𝑖+𝑘 𝑖 𝑘=0 𝑖=2𝑘+2 (4.6)

2

𝑛+1

(𝑛 + 1) = (−1)

𝑛 𝑛+1+𝑘 ∑ ∑ (𝑛 + 1 + 𝑘 ) 𝑘=0 𝑖=2𝑘+1

𝑖

2𝑖−1 (−1)𝑖+𝑘 , 𝑛 ≥ 0.

Employing (4.5, 4.6) in (4.3, 4.4) gives ( ) 𝑛 𝑛 ∑ ∑ 2𝑛 + 4 (4.7) = (𝑛 + 1)(𝑛 + 2) + 2 ⋅ 𝑖 ⋅ (𝑖 + 1) + 2 ⋅ (𝑖 + 1)2 , 2𝑛 + 1 𝑖=1 𝑖=0 ( ) 𝑛 𝑛−1 ∑ ∑ 2𝑛 + 3 (4.8) = (𝑛 + 1)2 + 2 ⋅ 𝑖 ⋅ (𝑖 + 1) + 2 ⋅ (𝑖 + 1)2 . 2𝑛 𝑖=1 𝑖=0

611

10

GROSSMAN, ZELEKE AND ZHU

Simplification of (4.7,4.8) leads to, ) 𝑛 ∑ 2𝑛 + 4 = (2𝑖 + 2)2 , 2𝑛 + 1 𝑖=0 ( ) 𝑛 ∑ 2𝑛 + 3 = (2𝑖 + 1)2 . 2𝑛 𝑖=0 (

(4.9) (4.10)

Summing (4.9, 4.10) yields, ( (

) ( ) 2𝑛+2 ∑ 2𝑛 + 4 2𝑛 + 3 + = 𝑖2 , 2𝑛 + 1 2𝑛 𝑖=1

) ( ) 2𝑛+1 ∑ 2𝑛 + 2 2𝑛 + 3 + = 𝑖2 . 2𝑛 − 1 2𝑛 𝑖=1

We also note the well-known formula 𝑛 ∑

(4.11)

𝑖=1

( ) 𝑛 1 2𝑛 + 2 𝑖 = ⋅ (𝑛 + 1) ⋅ (2𝑛 + 1) = , 6 4 2𝑛 − 1 2

which is implied by (4.9). Moreover, these ideas can be extended by applying (3.11,4.1,4.2) to corollary 3.2. We obtain, setting 𝑏 = −2 that (

( 𝑛 ) ) 𝑛−1 ∑ ∑ 2𝑛 + 4 = −𝑎2,2𝑛 = − 𝑎1,2𝑖 + 𝑎1,2𝑖+1 2𝑛 𝑖=0 𝑖=0

(4.12)

=

𝑛 ∑

2

(𝑖 + 1) −

𝑖=0



𝑖=1

𝑛 ∑ 𝑖−1 ∑ 𝑖=1

𝑛 ∑ 𝑖 ∑

𝑙−1 𝑙+1+𝑘 ∑ ∑ (𝑙 + 1 + 𝑘 ) (−1) ⋅ 2 (−1)𝑗+𝑘 2𝑗−1 𝑗 𝑙=1 𝑘=0 𝑗=2𝑘+2 𝑙

𝑙 𝑙+1+𝑘 ∑ ∑ (𝑙 + 1 + 𝑘 ) (−1) ⋅ 2 (−1)𝑗+𝑘 2𝑗−1 𝑗 𝑙=0 𝑘=0 𝑗=2𝑘+1

+

𝑛−1 ∑



𝑛−1 ∑ 𝑖 ∑

𝑙

(𝑖 + 1)(𝑖 + 2) −

𝑖=0

𝑖=0 𝑙=0

𝑛−1 ∑ 𝑖 ∑ 𝑖=1

𝑙

(−1) ⋅ 2

𝑙−1 𝑙+1+𝑘 ∑ ∑ (𝑙 + 1 + 𝑘 ) (−1) ⋅ 2 (−1)𝑗+𝑘 2𝑗−1 𝑗 𝑙=1 𝑘=0 𝑗=2𝑘+2

𝑙 𝑙+1+𝑘 ∑ ∑

𝑘=0 𝑗=2𝑘+1

(

𝑙

) 𝑙+1+𝑘 (−1)𝑗+𝑘 2𝑗−1 , 𝑗

612

RECURRENCE RELATION WITH BINOMIAL COEFFICIENT

(

11

( 𝑛 ) ) 𝑛 ∑ ∑ 2𝑛 + 5 = −𝑎2,2𝑛+1 = − 𝑎1,2𝑖 + 𝑎1,2𝑖+1 2𝑛 + 1 𝑖=0 𝑖=0

(4.13)

=

𝑛 ∑

2

(𝑖 + 1) −

𝑖=0

− +

𝑖=1

𝑛 ∑ 𝑖−1 ∑ 𝑖=1

𝑛 ∑

𝑛 ∑ 𝑖 ∑

𝑙−1 𝑙+1+𝑘 ∑ ∑ (𝑙 + 1 + 𝑘 ) (−1) ⋅ 2 (−1)𝑗+𝑘 2𝑗−1 𝑗 𝑙=1 𝑘=0 𝑗=2𝑘+2

𝑙 𝑙+1+𝑘 ∑ ∑ (𝑙 + 1 + 𝑘 ) (−1) ⋅ 2 (−1)𝑗+𝑘 2𝑗−1 𝑗 𝑙=0 𝑘=0 𝑗=2𝑘+1 𝑙

(𝑖 + 1)(𝑖 + 2) −

𝑖=0



𝑛 ∑ 𝑖 ∑ 𝑖=1

𝑛 ∑ 𝑖 ∑ 𝑖=0

𝑙

𝑙−1 𝑙+1+𝑘 ∑ ∑ (𝑙 + 1 + 𝑘 ) (−1) ⋅ 2 (−1)𝑗+𝑘 2𝑗−1 𝑗 𝑙=1 𝑘=0 𝑗=2𝑘+2 𝑙

𝑙 𝑙+1+𝑘 ∑ ∑ (𝑙 + 1 + 𝑘 ) (−1) ⋅ 2 (−1)𝑗+𝑘 2𝑗−1 . 𝑗 𝑙=0 𝑘=0 𝑗=2𝑘+1 𝑙

We have by summation, from (4.5,4.6),

(4.14)

𝑖 ∑ 𝑙=1

(4.15)

𝑙−1 𝑙+1+𝑘 𝑖 ∑ (𝑙 + 1 + 𝑘 ) ∑ ∑ 𝑙+1 2𝑗−1 (−1)𝑗+𝑘 , (−1) ⋅ 𝑙 ⋅ (𝑙 + 1) = 𝑗 𝑘=0 𝑗=2𝑘+2 𝑙=1

𝑖 ∑

2

(𝑙 + 1) =

𝑖 ∑ 𝑙=1

𝑙=1

𝑙+1

(−1)

𝑙+1+𝑘 𝑙 ∑ (𝑙 + 1 + 𝑘 ) ∑ 2𝑗−1 (−1)𝑗+𝑘 . 𝑗 𝑘=0 𝑗=2𝑘+1

Employing (4.14, 4.15) in (4.12, 4.13) we find that,

( (4.16)

) 𝑛 ∑ 𝑖 𝑛 ∑ 𝑖−1 𝑛 ∑ ∑ ∑ 2𝑛 + 4 2 = 2 ⋅ 𝑙 ⋅ (𝑙 + 1) + 2 ⋅ (𝑙 + 1) + (𝑖 + 1)2 2𝑛 𝑖=1 𝑙=1 𝑖=1 𝑙=0 𝑖=0 +

𝑛−1 ∑ 𝑖 ∑

2 ⋅ 𝑙 ⋅ (𝑙 + 1) +

𝑖=1 𝑙=1

( (4.17)

𝑛−1 ∑ 𝑖 ∑

2

2 ⋅ (𝑙 + 1) +

𝑖=0 𝑙=0

𝑛−1 ∑

(𝑖 + 1) ⋅ (𝑖 + 2),

𝑖=0

) 𝑛 ∑ 𝑖 𝑛 ∑ 𝑖−1 𝑛 ∑ ∑ ∑ 2𝑛 + 5 = 2 ⋅ 𝑙 ⋅ (𝑙 + 1) + 2 ⋅ (𝑙 + 1)2 + (𝑖 + 1)2 2𝑛 + 1 𝑖=1 𝑙=1 𝑖=1 𝑙=0 𝑖=0 +

𝑛 ∑ 𝑖 ∑ 𝑖=1 𝑙=1

2 ⋅ 𝑙 ⋅ (𝑙 + 1) +

𝑛 ∑ 𝑖 ∑ 𝑖=0 𝑙=0

2

2 ⋅ (𝑙 + 1) +

𝑛 ∑ 𝑖=0

(𝑖 + 1) ⋅ (𝑖 + 2).

613

12

GROSSMAN, ZELEKE AND ZHU

We consider (4.16,4.17) and find that (

2𝑛 + 4 2𝑛

)

= 4⋅

𝑛−1 ∑ 𝑖 ∑

𝑙 ⋅ (𝑙 + 1) + 4 ⋅

𝑖=1 𝑙=1

(4.18)

+

𝑛 ∑

2𝑛 + 5 2𝑛 + 1

)

= 4⋅

(𝑖 + 1)2 + 3

𝑛−1 ∑

(𝑖 + 1)(𝑖 + 2),

𝑖=0

𝑛 ∑ 𝑖 ∑

𝑙 ⋅ (𝑙 + 1) + 4 ⋅

𝑖=1 𝑙=1

(4.19)

+ 3⋅

𝑛 ∑ 𝑖=0

⋅(𝑙 + 1)2

𝑖=1 𝑙=0

𝑖=0

(

𝑛 ∑ 𝑖−1 ∑

𝑛 ∑ 𝑖−1 ∑

⋅(𝑙 + 1)2

𝑖=1 𝑙=0

𝑛 ∑ (𝑖 + 1)2 + (𝑖 + 1)(𝑖 + 2). 𝑖=0

We simplify (4.18) employing (4.11) to get (

(4.20)

) 𝑛 𝑛−1 ∑ 𝑖−1 𝑛−1 ∑ 𝑖 ∑ ∑ ∑ 2𝑛 + 4 2 (8 ⋅ 𝑙2 + 3 ⋅ 𝑙) + (𝑛 + 1)2 𝑙+ 𝑙 +4⋅ = 8⋅ 2𝑛 𝑖=1 𝑙=1 𝑖=1 𝑙=1 𝑙=1 ( ) ( )] ( ) 𝑛 [ ∑ 2𝑖 + 2 𝑖+1 𝑛+2 = 2⋅ +4⋅ + 2𝑖 − 1 2 2 𝑖=1 ) 𝑛 ( 1 ∑ 4𝑖 + 4 = ⋅ . 4 𝑖=0 3

We find, after straightforward simplification of (4.20) that, 𝑛 ∑

) ( ) ( ) ( ) 2𝑛 + 4 2𝑛 + 2 𝑛+1 𝑛+1 9 (4.21) (2𝑖) = 3 ⋅ − ⋅ − 13 ⋅ −3⋅ . 4 2 3 2 1 𝑖=1 (

3

Using the fact that (

) ( ) ( ) 2𝑛 + 4 2𝑛 + 4 2𝑛 + 5 + = 2𝑛 2𝑛 + 1 2𝑛 + 1

and (4.9) we find that 𝑛 [ ∑ 1( 𝑖=1

] ( ) ) 31 14 2𝑛 + 5 2 8 ⋅ 𝑖 + 12 ⋅ 𝑖 + 6 ⋅ 𝑖 + 1 + 6 ⋅ 𝑖 + ⋅𝑖+ +5= , 3 3 3 2𝑛 + 1 3

2

so that 𝑛 ∑

3

(2𝑖 + 1)

𝑖=0

(4.22)

( ) ( ) ( ) ) 2𝑛 + 2 𝑛+1 𝑛+1 2𝑛 + 5 9 − 31 ⋅ − 14 ⋅ , 𝑛 ≥ 1. = 3⋅ − ⋅ 3 2 1 2 4 (

614

RECURRENCE RELATION WITH BINOMIAL COEFFICIENT

13

Summing (4.21,4.22) gives ( ) ( ) ( ) ( ) ( ) 2𝑛+1 ∑ 2𝑛 + 5 2𝑛 + 4 2𝑛 + 2 𝑛+1 𝑛+1 3 𝑖 =3⋅ +3⋅ −9⋅ − 44 ⋅ − 17 ⋅ 4 4 3 2 1 𝑖=1 ( ) ( ) ( ) ( ) ( ) ( ) 2𝑛 + 5 2𝑛 + 4 2𝑛 + 2 𝑛+1 𝑛 𝑛−1 =3⋅ +3⋅ −9⋅ − 44 ⋅ − 17 ⋅ − 17 ⋅ . 4 4 3 2 1 0 (4.23) We next use the fact that (4.24)

[( ) ( )] ( ) ( ) 2𝑛 + 2 2𝑛 + 3 2(𝑛 + 1) 2𝑛 + 1 (2𝑛 + 1) = 3 + −3 + . 3 3 2 1 3

Thus, for evenly many terms from (4.23, 4.24) ( ) ( ) ( ) ( ) ( ) 2𝑛 ∑ 2𝑛 + 5 2𝑛 + 4 2𝑛 + 2 𝑛+1 𝑛+1 3 𝑖 =3⋅ +3⋅ −9⋅ − 44 ⋅ − 17 ⋅ 4 4 3 2 1 𝑖=1 [( ) ( )] ( ) ( ) 2𝑛 + 2 2𝑛 + 3 2(𝑛 + 1) 2𝑛 + 1 −3 + +3 − 3 3 2 1 ( ) ( ) ( ) ( ) ( ) 2𝑛 + 5 2𝑛 + 4 2𝑛 + 3 2𝑛 + 2 2𝑛 + 1 =3⋅ +3⋅ −3⋅ − 12 ⋅ +3 4 4 3 3 2 ( ) ( ) ( ) 𝑛+1 𝑛 𝑛−1 −44 ⋅ − 13 ⋅ − 15 ⋅ . 2 1 0 (4.25) Finally, we present, without proof, a variation of a similar result in [9]: ∫ 𝑛 𝑝 ∑ 𝑆𝑝 (𝑛) − 𝑐𝑗 𝑆𝑝−𝑗 (𝑛) = 𝑥𝑝 𝑑𝑥, 0

𝑗=1

𝑐𝑗 =

𝑝+1 1 ⋅ ⋅ (−1)𝑗+1 . 𝑗+1 𝑝+1 (

)

References [1] Francois Dubeau, On r-Generalized Fibonacci Numbers, The Fibonacci Quarterly, 27:3, (1989), 221-229. [2] Ivan Flores, Direct Calculation of k-Generalized Fibonacci Numbers, The Fibonacci Quarterly, 5:3, (1967), 259-266. [3] George Grossman, Linear recurrence relations and the binomial coefficients, in Proceedings of XII𝑡ℎ CZECH-POLISH-SLOVAK Mathematical School by the Faculty of Education of University J. E. ´ ı nad Labem, 𝐻𝑢𝑏𝑙𝑜˘ Purkyn˘ 𝑒, Ust´ 𝑠, June 2-4, 2005, pp. 111-119. [4]

, Akalu Tefera and Aklulu Zeleke, Summation Identities for Representation of Certain Real Numbers, International Journal of Mathematics and Mathematical Sciences(e-journal), Volume 2006 , Article ID 78739, 8 pages.

615

14

[5]

GROSSMAN, ZELEKE AND ZHU

2004.

, Akalu Tefera and Aklulu Zeleke, On proofs of certain combinatorial identities, pre-print,

[6]

, Akalu Tefera and Aklulu Zeleke, On Representation of Certain Real Numbers Using Combinatorial Identities, pre-print, 2009.

[7]

and Aklilu Zeleke, On linear recurrence relations and combinatorial identities, Journal of Concrete and Applicable Mathematics, Vol. 1, (2003), No. 3, pp. 229-245, Nova Science Publishers.

[8]

and Sivaram Narayan, On the characteristic polynomial of the 𝑗𝑡ℎ order Fibonacci sequence, Applications of Fibonacci numbers, Vol. 8, (1999), pp. 165-177 Kluwer Acad. Publ., Dordrecht.

[9] Kenneth Ireland and Michael Rosen, A Classical Introduction to Modern Number Theory, Springer, 1990. [10] Georg P´olya and Gabor Szeg¨o, Aufgaben and Lehrs¨ atze aus der Analysis I New York, Dover Publications, 1945. [11] John Riordan. Combinatorial Identities, John Wiley & Sons Inc., 1968. [12] Marko Petkovˇsek, Herbert S. Wilf and Doron Zeilberger, A = B, A. K. Peters, Wellesley, Massachusetts, 1996. [13] Xinyun Zhu and George Grossman, On zeros of polynomial sequences, JoCAA, (Journal of Computational Analysis with Applications,) 11, No. 1, ( 2009), pp. 140-158. . Department of Mathematics, Central Michigan University, Mt. Pleasant, MI 48859 [email protected] Department of Mathematics, Lyman Briggs College, East Lansing, MI 48825, [email protected] Department of Mathematics, University of Texas of the Permian Basin, Odessa, TX 79762, zhu [email protected]

JOURNAL 616 OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.4, 616-622, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

On the q-extension of Genocchi polynomials C. S. Ryoo Department of Mathematics, Hannam University, Daejeon 306-791, Korea e-mail: [email protected]

Abstract : In this paper we observe reflection symmetries of the q-extension of Genocchi polynomials, cn,q (x), using numerical investigation. By numerical experiments, we demonstrate a remarkably regular structure of the complex roots of the cn,q (x) for q < 0. Finally, we give a table for the solutions of the q-extension of Genocchi polynomials.
Key words : q-extension of Genocchi numbers, q-extension of Genocchi polynomials, Roots of q-extension of Genocchi polynomials, Reflection symmetries of the q-extension of Genocchi polynomials
2000 Mathematics Subject Classification : 11S80, 11B68

1. Introduction

In [2], T. Kim constructed the q-extension of Genocchi numbers cn,q and polynomials cn,q (x) using generating functions. In order to study the q-extension of Genocchi polynomials cn,q (x), we must understand the structure of the q-extension of Genocchi polynomials cn,q (x). Therefore, using computer, a realistic study for the q-extension of Genocchi polynomials cn,q (x) is very interesting. For related topics the interested reader is referred to [3, 4, 5, 6]. The main purpose of this paper is to consider reflection symmetries of the q-extension of Genocchi polynomials, cn,q (x) for values of the index n by using computer. First, we introduce the q-extension of Genocchi polynomials cn,q (x) (see [2, 3]). Let q be a complex number with |q| < 1. We consider the following generating functions:

    F_q(t) = \sum_{n=0}^{\infty} c_{n,q} \frac{t^n}{n!} = e^{\frac{t}{1-q}} \sum_{n=0}^{\infty} \left(\frac{1}{1-q}\right)^{n-1} (-1)^{n-1} \frac{(2n+1)(1-q^n)}{1-q^{2n+1}} \frac{t^n}{n!},     (1)

and

    F_q(x,t) = \sum_{n=0}^{\infty} c_{n,q}(x) \frac{t^n}{n!} = e^{\frac{t}{1-q}} \sum_{n=0}^{\infty} \left(\frac{1}{1-q}\right)^{n-1} (-1)^{n-1} q^{nx} \frac{(2n+1)(1-q^n)}{1-q^{2n+1}} \frac{t^n}{n!}.     (2)

By simple calculation in (2), we have

    c_{n,q}(x) = \left(\frac{1}{1-q}\right)^{n-1} \sum_{i=0}^{n} \binom{n}{i} (-1)^{i-1} q^{ix} \frac{(2i+1)(1-q^i)}{1-q^{2i+1}}.

When x = 0, we write cn,q = cn,q (0), which are called the q-extension of Genocchi numbers. cn,q (x) is a polynomial of degree n in q^x. Since

    \sum_{n=0}^{\infty} G_n(1-x) \frac{(-t)^n}{n!} = F(1-x,-t) = \frac{-2t}{e^{-t}+1} e^{(1-x)(-t)} = \frac{-2t}{e^{t}+1} e^{xt} = -F(x,t) = -\sum_{n=0}^{\infty} G_n(x) \frac{t^n}{n!},

we obtain that

    G_n(x) = (-1)^{n+1} G_n(1-x).     (3)

We prove that G_n(x), x ∈ C, has Re(x) = 1/2 reflection symmetry in addition to the usual Im(x) = 0 reflection symmetry of analytic complex functions. The question is: what happens with the reflection symmetry (3), when one considers the q-extension of Genocchi polynomials? We are going now to consider reflection at 1/2 of x on the q-extension of Genocchi polynomials. For n ≥ 0, we have

    c^*_{n,q}(x) ≡ c_{n,q^{-1}}(1-x) = (-1)^{n-1} q^n c_{n,q}(x).     (4)

(4) is the q-analog of the classical reflection formula (3). c∗n,q (x) (q > 0) has the Im(x) = 0 reflection symmetry of analytic complex functions (Figure 3). c∗n,q (x) does not have Re(x) = 1/3 reflection symmetry (Figure 3). If c∗n,q (x) = 0 (q > 0), then c∗n,q−1 (1 − x) = c∗n,q (x∗) = c∗n,q−1 (1 − x∗) = 0, where ∗ denotes complex conjugation.

2. Zeros of the c∗n,q (x)

In order to study c∗n,q (x), we must understand the structure of the q-extension of Genocchi polynomials. In this section, by numerical investigation, we examine properties of the figures, look for patterns, and pose open problems. First, we display the shapes of the c∗n,q (x) and we investigate the zeros of the c∗n,q (x). For n = 1, · · · , 10, we can draw a plot of the c∗n,q (x), respectively. This shows the ten plots combined into one. We display the shape of cn,q (x), c∗n,q (x), n = 1, · · · , 10, −1 ≤ x ≤ 1.
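The real zeros tabulated below can be reproduced with a few lines of code, using the formula for c_{n,q}(x) reconstructed above and the fact that, for q > 0, c∗_{n,q}(x) differs from c_{n,q}(x) only by the nonvanishing factor (−1)^{n−1} q^n, so their zeros coincide. This is a minimal sketch, not the author's computation.

```python
import numpy as np
from math import comb, log

def c_poly_coeffs(n, q):
    # Coefficients (in y = q^x) of c_{n,q}(x) =
    #   (1/(1-q))^{n-1} * sum_i C(n,i)(-1)^{i-1} y^i (2i+1)(1-q^i)/(1-q^{2i+1});
    # entry i is the coefficient of y^i.
    return [comb(n, i) * (-1) ** (i - 1) * (2 * i + 1) * (1 - q ** i)
            / (1 - q ** (2 * i + 1)) / (1 - q) ** (n - 1)
            for i in range(n + 1)]

def real_zeros_in_x(n, q):
    coeffs = c_poly_coeffs(n, q)            # ascending powers of y
    roots = np.roots(coeffs[::-1])          # np.roots expects descending order
    xs = [log(y.real) / log(q) for y in roots
          if abs(y.imag) < 1e-10 and y.real > 0]   # y = q^x must be real and positive
    return sorted(xs)

print(real_zeros_in_x(2, 1/3))   # ~[0.0653], cf. Table 2
```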

Figure 1: Curves of c∗n,q (x), q = 1/3

Figure 2: Curves of c∗n,q (x), q = 1/2

We investigate the beautiful zeros of the c∗n,q (x) by using a computer. We plot the zeros of the c∗n,q (x) for n = 30, 40, 50, 60, q = 1/3 and x ∈ C. (Figure 3). We plot the zeros of the c∗n,q (x) for n = 30, 40, 50, 60, q = −1/3 and x ∈ C. (Figure 4). We plot the zeros of the cn,q (x) for n = 30, 40, 50, 60, q = −1/3 and x ∈ C. (Figure 5). We observe a remarkably regular structure of the complex roots of the c∗n,q (x). We hope to verify a remarkably regular structure of the complex roots of the c∗n,q (x)(Table 1). Next, we calculate an approximate solution satisfying c∗n,q (x), x ∈ R. The results are given in Table 2.

618

RYOO: GENOCCHI POLYNOMIALS

ImHxL

2

2

1.5

1.5

1

1

0.5

0.5

ImHxL

0

0

-0.5

-0.5

-1

-1

-1.5

-1.5

0

1

2

4

3

0

1

ReHxL

ImHxL

2

2

1.5

1.5

1

1

0.5

0.5

ImHxL

0

-0.5

-1

-1

-1.5

-1.5

1

3

4

2

3

4

0

-0.5

0

2

ReHxL

2

4

3

0

ReHxL

1

ReHxL

Figure 3: Zeros of c∗n,q (x) for n = 10, 20, 30, 40, q = 1/3

Table 1. Numbers of real and complex zeros of c∗n,q (x) degree n

q = − 13 real zeros complex zeros

q = 13 real zeros complex zeros

2

0

1

1

0

4

0

3

1

2

6

0

5

1

4

8

0

7

3

4

10

0

9

3

6

12

0

11

3

8

14

0

13

3

10

RYOO: GENOCCHI POLYNOMIALS

ImHxL

1.5

1.5

1

1

0.5

0.5

ImHxL

0

0

-0.5

-0.5

-1

-1

-0.75 -0.5 -0.25

0

0.25 0.5 0.75

619

1

-0.75 -0.5 -0.25

ReHxL

ImHxL

1.5

1.5

1

1

0.5

0.5

ImHxL

0

-0.5

-1

-1

0

0.25 0.5 0.75

1

0.25 0.5 0.75

1

0

-0.5

-0.75 -0.5 -0.25

0

ReHxL

0.25 0.5 0.75

1

-0.75 -0.5 -0.25

ReHxL

0

ReHxL

Figure 4: Zeros of c∗n,q (x) for n = 10, 20, 30, 40, q = −1/3

Table 2. Approximate solutions of c∗n,q (x) = 0, q = 1/3, x ∈ R degree n

x

2

0.0653041

3

−0.19615,

4 5

0.452437 −0.248071,

6 7 8 9 10

0.268175

0.609023

0.743617 −0.144954, −0.35968,

−0.0411608,

0.0527129, −0.397998,

0.861249 0.965587

1.05928

0.137934,

1.14427

620

RYOO: GENOCCHI POLYNOMIALS

1.5

1.5

1

1

0.5

0.5

ImHxL

ImHxL 0

0

-0.5

-0.5

-0.75 -0.5 -0.25

0

0.25 0.5 0.75

1

-0.75 -0.5 -0.25

ReHxL

0

0.25 0.5 0.75

1

0.25 0.5 0.75

1

ReHxL

1.5

1.5

1

1

0.5

0.5

ImHxL

ImHxL 0

0

-0.5

-0.5

-0.75 -0.5 -0.25

0

0.25 0.5 0.75

1

-0.75 -0.5 -0.25

ReHxL

0

ReHxL

Figure 5: Zeros of cn,q (x) for n = 10, 20, 30, 40, q = −1/3

Table 3. Approximate solutions of cn,q (x) = 0, q = 1/3, x ∈ R degree n

x

2

0.0653041

3

−0.19615,

4 5

0.452437 −0.248071,

6 7 8 9 10

0.268175

0.609023

0.743617 −0.144954, −0.35968,

−0.0411608,

0.0527129, −0.397998,

0.861249 0.965587

1.05928

0.137934,

1.14427

We calculated an approximate solution satisfying cn,q (x), c∗n,q (x), q = −1/3, x ∈ R. The results are

RYOO: GENOCCHI POLYNOMIALS

given in Table 4 and Table 5. Table 4. Approximate solutions of c∗n,q (x), q = −1/3, x ∈ C degree n

x

2

−0.055099 + 0.157561i

4

−0.263443 − 0.136379i,

−0.082275 + 0.235274i,

0.291022 + 0.0575166i 6

−0.279677 + 0.0472548i,

−0.257121 − 0.289292i,

−0.0911906 + 0.260769i,

0.189278 + 0.211248i,

0.381371 − 0.0660123i 8

−0.309726 − 0.0831741i,

−0.260268 + 0.127805i,

−0.241352 − 0.386845i

− 0.095592 + 0.273355i,

0.123901 + 0.262149i,

0.294061 + 0.12797i,

0.429832 − 0.152132i Table 5. Approximate solutions of cn,q (x), q = −1/3, x ∈ C degree n

x

2

−0.055099 − 0.157561i

4

−0.263443 + 0.136379i,

−0.082275 − 0.235274i,

0.291022 − 0.0575166i 6

−0.279677 − 0.0472548i, −0.0911906 − 0.260769i,

−0.257121 + 0.289292i, 0.189278 − 0.211248i,

0.381371 + 0.0660123 8

−0.309726 + 0.0831741i, −0.241352 + 0.386845i 0.123901 − 0.262149i,

−0.260268 − 0.127805i, − 0.095592 − 0.273355i, 0.294061 − 0.12797i,

0.429832 + 0.152132i

3. Directions for Further Research Finally, we shall consider the more general problems. Prove or disprove: c∗n,q (x) = 0 has n − 1 distinct solutions. Find the numbers of complex zeros Cc∗n,q (x) of c∗n,q (x), Im(x) 6= 0. Prove or disprove: Since n − 1 is the degree of the polynomial c∗n,q (x), the number of real zeros Rc∗n,q (x) lying on the real plane Im(x) = 0 is then Rc∗n,q (x) = n − 1 − Cc∗n,q (x) , where Cc∗n,q (x) denotes complex zeros. See Table 1 for tabulated values of Rc∗n,q (x) and Cc∗n,q (x) . The open question is: what happens with the reflection symmetry (4), when one considers the c∗n,q (x) for q < 0 ? (See Figures 4, 5). Find the equation of envelope curves bounding the real zeros lying on the plane. The author has no doubt that investigation along this line will lead to a new approach employing numerical method in the field of research of the c∗n,q (x) to appear in mathematics and physics. The reader may refer to [3, 4, 5, 6] for the details

621

622

RYOO: GENOCCHI POLYNOMIALS

References [1] T. Kim, On the q-extension of Euler and Genocchi numbers, J. Math. Anal. Appl., 326 (2007), 1458-1465. [2] T. Kim, L.-C. Jang, H. K. Pak, A note on q-Euler and Genocchi numbers, Proc. Japan Acad., 77 A (2001), 139-141. [3] T. Kim, C. S. Ryoo, Exploring the q-Euler numbers and polynomials, Journal of concrete and Applicable Mathematics , 7(4) (2009), 349-357. [4] C.S.Ryoo, A numerical computation on the structure of the roots of q-extension of Genocchi polynomials, Applied Math. Letters, 21 (2008), 348-354. [5] C.S.Ryoo, Calculating zeros of the twisted Genocchi polynomials, Advanced Studies in Contemporary Mathematics, 17 (2008), 147-159. [6] C.S.Ryoo, Y. S. Yoo, A note on Euler numbers and polynomials, Journal of concrete and Applicable Mathematics, 7(4) (2009), 341-348.

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.4, 623-630, 2010, COPYRIGHT 2010 EUDOXUS PRESS, 623 LLC

On Best Simultaneous Approximation in Semi Metric Spaces H. K. Pathak1 and Satyaj Tiwari2 1

Abstract In this paper, the existence of invariant best simultaneous approximation in semi metric space is proved. In doing so, we have used a recent result of Moutawakil regarding the fixed points for set-valued mappings.

1. Introduction In the realm of best approximation theory, it is vaible, meaningful and potentially productive to know whether some useful properties of the function being approximated is inherited by the approximating function. In this perspective, Meinardus [8] observed the general principle that could be applied, while doing so the author has employed a fixed point theorem as a tool to establish it. The result of Meinardus was further generalized by Habiniak [5], Smoluk [15] and Subrahmanyam [16]. On the other hand, Beg and Sahazad [2], Fan [4], Hicks and Humphries [6], Reich [10], Singh [13],[14] and many others have used fixed point theorems in approximation theory, to prove existence of best approximation. Various types of applications of fixed point theorems may be seen in Klee [7], Meinardus [8] and Vlasov [18]. Some applications of the fixed point theorems to best simultaneous approximation is given by Sahney and Singh [11]. For the detail survey of the subject we refer the reader to Cheney [3]. In this paper, we prove the existence of invariant best simultaneous approximation in semi metric space, while doing so, we use the recent result of 1

Corresponding Author,E-mail:[email protected]

1

624

PATHAK-TIWARI: BEST SIMULTANEOUS APPROXIMATION

Moutawakil [19] on the fixed points for set-valued mappings.

2. Preliminaries and Definitions Let (X, d) be a metric space. Let(CB(X), H) denote the hyperspace of nonempty closed bounded subsets of X, where H is the Housdroff metric induced by d, that is, H(A, B) = max{sup d(a, B), sup d(b, A)} a∈A

b∈B

for all A, B ∈ CB(X), where d(x, A) = inf y∈A {d(x, y) : x ∈ X} and A ⊂ X. Although the fixed point theory for single valued maps is very rich and well developed, the multivalued case is not. Note that the multivalued mappings play a major role in many areas as in studying disjunctive logic programs. On the other hand; it has been observed that (see example [22]) that the distance function used in certain metric theorems proofs need not satisfy the triangular inequality nor d(x, x) = 0 for all x. Motivated by this fact, Hicks and Rhoades [22] established some common fixed point theorems in symmetric spaces and proved that very general probabilistic structures admits a compatible symmetric or semi-metric. Recall that a symmetric on a set X is a non negative real valued function d on X x X such that (i)d(x, y) = 0 if and only if x = y, (ii) d(x, y) = d(y, x). In order to unify the notation, we need the following notation: (W.4) Given {xn },{yn } and x in X, lim d(xn , x) = 0 and lim d(xn , yn ) = 0 n→∞

n→∞

imply that lim d(yn , x) = 0. n→∞ A sequence in X is called a d−cauchy sequence if it satisfies the usual metric condition. X is S−complete if for every d−cauchy sequence (xn ), there exists x in X with lim d(xn , x) = 0 n→∞

The Hausdorff distance in a symmetric space. Definition. Let (X, d) be a symmetric space and A a nonempty subset of X. (i) We say that A is d − closed iff A¯d = A where

2

PATHAK-TIWARI: BEST SIMULTANEOUS APPROXIMATION

A¯d = {x ∈ X : D(x, A) = 0} and d(x, A) = inf {d(x, y) : y ∈ A} (ii) We say that A is d-bounded iff δd (A) < ∞ where δd (A) = sup{d(x, y) : x, y ∈ A} The following definition is a generalization of the well-known Hausdorff distance to the setting of symmetric case. Definition. Let (X, d) be a d−bounded symmetric space and let C(X) be the set of all nonempty d−closed subset of (X, d). Consider the function H : 2X x 2X → R defined by

H(A, B) = max{sup d(a, B), sup d(b, A)} a∈A

b∈B

for all A, B ∈ C(X). Remark. It is easy to see that (C(X), D) is a symmetric space. Now we give the notion of convex structure introduced by Gudder [20](see also, Petrusel [21]). Let X be a set and F : [0, 1] × X × X → X a mapping. Then the pair (X, F ) forms a convex prestructure. Let (X, F ) be a convex prestructure. If F satisfies the following conditions: (i) F (λ, x, F (µ, y, z)) = F (λ + (1 − λ)µ, F (λ(λ + (1 − λ)µ)−1 , x, y), z) for every λ, µ ∈ (0, 1) with λ + (1 − λ)µ 6= 0 and x, y, z ∈ X. (ii) F (λ, x, x) = x for any x ∈ X and λ ∈ (0, 1), then (X, F ) forms a semi-convex structure. If (X, F ) is a semi-convex structure, then (SC1) F (1, x, y) = x for any x, y ∈ X. A semi-convex structure is said to be regular if (SC2) λ ≤ µ ⇒ F (λ, x, y) ≤ F (µ, x, y) where λ, µ ∈ (0, 1). A semi-convex structure (X, F ) is said to form a convex structure if F also satisfies the conditions

3

625

626

PATHAK-TIWARI: BEST SIMULTANEOUS APPROXIMATION

(iii) F (λ, x, y) = F (1 − λ, y, x) for every λ ∈ (0, 1) and x, y ∈ X. (iv) if F (λ, x, y) = F (λ, x, z) for some λ 6= 1, x ∈ X then y = z. Let (X, F ) be a convex structure. A subset Y of X is called (a) Fstarshaped if there exist p ∈ Y so that for any x ∈ Y and λ ∈ (0, 1), F (λ, x, p) ∈ Y . (b) F-convex if for any x, y in Y and λ ∈ (0, 1), F (λ, x, y) ∈ Y . For F (λ, x, y) = λx + (1 − λ)y, we obtain the known notion of starshaped convexity from linear spaces. Petrusel [20] noted with an example that a set can be a F -semi convex structure without being a convex structure. Let (X, F ) be a semi-convex structure. A subset Y of X is called F semi-starshaped if there exists p ∈ Y so that for any x ∈ Y and λ ∈ (0, 1), F (λ, x, p) ∈ Y . A Banach space X with semi-convex structure F is said to satisfy condition (P1 ) at p ∈ K (where K is semi-starshaped and p is star centre) if F is continuous relative to the following argument : for any x, y ∈ X, λ ∈ (o, 1) k (F (λ, x, p) − F (λ, y, p) ≤ λ k x − y k .

To prove our main result(Theorem 3.1 below), we shall make use of the following result due to Moutawakil ([19], theorem 2.2.1). Theorem A. Let (X, d) be a d−bounded and S−complete symmetric space satisfying (W − 4) and let T : X → C(X) be a set-valued mapping such that H(T x, T y) ≤ kd(x, y) for all x, y ∈ X, where k ∈ [0, 1). Then there exist u ∈ X such that u ∈ T u. Let (X, d) be a metric space and G a nonempty subset of X. Suppose A ∈ C(X), the set of nonempty d−closed subsets of (X, d),then we write

rG (A) = infg∈G supa∈A d(a, g) centG (A) = {g0 ∈ G : supa∈A d(a, g0 ) = rG (A)}. The number rG (A) is called the Chebyshev radius of A w.r.t G and an element y0 ∈ centG (A) is called a best simultaneous approximation of A w.r.t G. If A = {x}, then rG (A) = d(x, G) and centG (A) is the set of all best approximations of x out of G. We also refer the reader to Milman [9] for further details.

4

PATHAK-TIWARI: BEST SIMULTANEOUS APPROXIMATION

3. Main Results Now we state and prove our main result. Theorem 3.1 Let (X, d) be a d-bounded and S-complete semi-metric space with semi-convex structure satisfying condition (P1 ) and T : X → C(X) be a set-valued mapping. Let G ∈ C(X). For A ⊂ C(X), if centG (A) is nonempty,compact, semi-starshaped, T -invariant and T satisfy the following conditions: (i) T is continuous on centG (A), and (ii) H(T x, T y) ≤ kd(x, y) for all x, y ∈ centG (A) with x 6= y, d(x, y) > 0, then centG (A) contains a T -invariant point. Proof. Let p be the starcentre of centG (A). Then F (x, p, λ) ∈ centG (A) for each x ∈ centG (A). Let {kn }∞ n=1 be a real sequence with 0 ≤ kn < 1 such that kn → 1 as n → ∞. Define Tn : centG (A) → C(centG (A)) by [ F (kn , y, p) Tn x = F (kn , T x, p) = y∈T x

for all x ∈ centG (A). Since p is semi-starcenter of centG (A) and T (centG (A)) ⊆ centG (A), It follows that Tn maps centG (A) to itself for each n. Now applying condition (P1 ), we obtain H(Tn x, Tn y) = H(F (kn , T x, p), F (kn , T y, p)) ≤ kn H(T x, T y)) 0

≤ kn d(x, y) i.e., 0

H(Tn x, Tn y) ≤ kn d(x, y) for all x, y ∈ centG (A), It follows by Theorem A that each Tn has a fixed point, say zn . Since centG (A) is complete, {zn } has a convergent subsequence {zni } such that zni → z(say) as i → ∞ for some z ∈ X. Since zni ∈ Tni zni = F (kni , T zni , p)

5

627

628

PATHAK-TIWARI: BEST SIMULTANEOUS APPROXIMATION

and kni → 1 as i → ∞, it follows that z ∈ T z. Hence centG (A) contains a T − invarient point. This complete the proof. Remark 3.2 Let (X, d) be a d-bounded and S-complete semi-metric space with semi-convex structure satisfying condition (P1 ) and let T be a self map on X. Let G ∈ X. For A ⊂ X, if centG (A) is nonempty, compact, semistarshaped, T -invariant and T satisfy the following conditions: (i) T is continuous on centG (A), and (ii) (T x, T y) ≤ kd(x, y) for all x, y ∈ centG (A) with x 6= y, d(x, y) > 0, then centG (A) contains a T -invariant point.

References [1] Beg, I. and Azam, A., Fixed points of multivalued locally contractive mappings, Boll. U.M.I., (7) 4-A (1990),227-233. [2] Beg, I. and Shahzad, N., An application of a fixed point theorem to best approximation, Approx. Theory and its Appl., 10:3(1994), 1-4. [3] Cheney, E. W., Application of fixed point theorems to approximation theory, Theory of Approximations, Academic Press (1976), 1-8. [4] Fan, Ky., Extension of two fixed point theorems of F.E.Browder, Math Z.,112 (1969), 234-240. [5] Habiniak, L.,Fixed point theorems and invarient approximations, J. Approximation Theory, 56(1989), 241-244. [6] Hicks, T.L. and Humphries M.D., A note on fixed point theorems, J.Approximation Theory, 34 (1982),221-222. [7] Klee, V., Convexity of chebyshev sets, Math. Ann., 142(1961), 292-304. [8] Meinardus, G., Invaiauz Bei Lineaeu Approximation, Arch.Rational Mech.Anal., 14(1963),301-303. [9] Milman, P.D., On best simultaneous approximation in normed linear spaces, J. Approximation Theory, 20(1977),223-238. 6

PATHAK-TIWARI: BEST SIMULTANEOUS APPROXIMATION

[10] Reich,S., Approximate selection,best approximations,fixed points and invarient sets, J.Math.Anal.Appl., 62(1978),104-113. [11] Sahney, B.N. and Singh S.P., On best simultaneous approximation, Approximation Theory III, Academic Press (1980),783-789. [12] Singh, S.P., Application of fixed point theorems in approximation theory, Applied Nonlinear Analysis, Academic Press (1979), 389-394. [13] ——-, Application of a fixed point theorem to approximation theory, J. Approx. Theory, 25(1979), 88-89. [14] ——-, Some results on best approximation in locally convex spaces, J. Approx. Theory, 28(1980), 72-76. [15] Smoluk, A., Invarient approximations, Mathematyka [Polish],17(1981), 17-22. [16] Subrahmanyam, P.V., An application of a fixed point theorem to best approximations, J. Approx. Theory, 20(1977),165-172. [17] Takahashi, W., A convexity in metric spaces and nonexpancive mappings I, Kodai Math. sem. Rep., 22(1970), 142-149. [18] Vlasov, L.P., Chebyshev sets in Banach spaces, Soviet Math. Polody, 2(1961),1373-1374. [19] Driss El Moutawakil, A fixed point Theorem for multivalued maps in symmetric spaces, Applied Mathematics E-Notes, 4(2004), 26-32. [20] Gudder, S.P., A general theory of convexity, Rend. Sem.Mat.Milano, 49,(1979), 89-96. [21] Petrusel, A., Starshaped and fixed points, Seminar on fixed point theory (Cluj-Napoca), Stud.Univ.”Babes-Bolyai”, Nr.3,(1987), 19-24. [22] T.L. Hicks and B.E. Rhoades, Fixed point theory in symmetric spaces with applications to probabilistic spaces, Nonlinear Analysis 36 (1999), 331-344. 1

H. K. Pathak School of Studies in Mathematics Pt. Ravishankar Shukla University, Raipur (C.G) 492010, India

7

629

630

PATHAK-TIWARI: BEST SIMULTANEOUS APPROXIMATION

E-mail : [email protected] 2

Satyaj Tiwari Department of Mathematics Shri Shankaracharya Institute of Professional Management and Technology, P.O. Sejabahar, Raipur (C.G) 490021, India E-mail : [email protected]

8

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.4, 631-637, 2010, COPYRIGHT 2010 EUDOXUS PRESS, 631LLC

ON BEST UNIFORM APPROXIMATION OF PERIODIC FUNCTIONS BY TRIGONOMETRIC POLYNOMIALS MICHAEL I. GANZBURG

∗ ≤ Abstract. We discuss inequalities of the form C1 En (f 0 )L∗2π ≤ En (f )C2π C2 En (f 0 )L∗2π between the errors of approximation in the uniform and integral metrics of periodic functions f and f 0 by trigonometric polynomials. As ap∗ in terms of the plications, we obtain upper and lower estimates for En (f )C2π Fourier coefficients of a function f .

1. Introduction ∗ C2π

be the Banach space of all 2π-periodic continuous functions f on the Let ∗ real line R with the finite norm ||f ||C2π := maxx∈[0,2π) |f (x)|; L∗2π the Banach space of all 2π-periodic measurable functions f on [0, 2π) with the finite norm R 2π ||f ||L∗2π := 0 |f (x)|dx, and Tn the class of all trigonometric polynomials of degree ≤ n. For n = 0, 1, . . . , we define the approximation errors by ∗ := ∗ , En (f )C2π inf ||f − Tn ||C2π

En (f )L∗2π := inf ||f − Tn ||L∗2π .

Tn ∈Tn

Tn ∈Tn

Throughout the paper C, C1 , . . . denote positive constants independent of n. The same symbol does not necessarily denote the same constant in different occurrences. In this paper we discuss upper and lower estimates for the error of uniform approximation of periodic functions by trigonometric polynomials in terms of their ∗ Fourier coefficients. The estimates for En (f )C2π are based on inequalities of the form 0 ∗ ≤ C2 (n)En (f )L∗ , C1 (n)En (f 0 )L∗2π ≤ En (f )C2π 2π

where C2 (n) = 1/2, n ≥ 1 (Theorem 2.1) and C1 (n) = [4(n+1)]−1 , n ≥ 1 (Theorem 3.1). Our approach is illustrated by five example. 2. Upper Estimates ∗ in terms of the Fourier coefficients We first discuss upper estimates for En (f )C2π of an individual function f . Jackson’s theorems are inapplicable in this case, while various linear approximation methods have proved to be efficient [8], [1] [9], [4]. In particular, if the series

f (x) = a0 /2 +

∞ X

(ak cos kx + bk sin kx)

(2.1)

k=1

Key words and phrases. Best approximation, trigonometric polynomials, Fourier coefficients. 1

632

2

MICHAEL I. GANZBURG

is absolutely convergent, then the following trivial estimate En (f )

∗ C2π

∞ X



(|ak | + |bk |)

k=n+1

generated by the Fourier approximation can be efficient for rapidly decreasing Fourier coefficients [4] as well as for slowly decreasing ones. The latter state∗ ment is supported by the estimate En (fλ )C2π ≤ C(n + 1)−λ+1 , where fλ (x) := P∞ −λ cos kx, λ > 1. Lower estimate (3.14) given in Section 3 shows that the k=1 k upper estimate cannot be improved. Note that for an integer λ > 1 an asymptotic ∗ formula for En (fλ )C2π was found in [2]. ∗ , which can be applied to conHere we consider upper estimates for En (f )C2π ditionally and absolutely convergent series (2.1). Our approach is based on the following simple result for functions of bounded variation. ∗ Theorem 2.1. Let f ∈ C2π be an absolutely continuous function with f 0 ∈ L∗2π . Then the following inequality holds:

P∞

0 ∗ ≤ (1/2)En (f )L∗ . En (f )C2π 2π

(2.2)

Proof. Let B1 (x) := k=1 k −1 sin kx be a Bernoulli function and let Qn ∈ Tn be a polynomial of best mean approximation to f 0 . Then the following relation holds [5, Eq. 1.5.1]: Z 1 2π f (x) = a0 /2 + B1 (x − t)f 0 (t)dt. π 0 In addition, the function Tn (x) := a0 /2 +

1 π

Z



B1 (x − t)Qn (t)dt 0

belongs to Tn . Therefore, ∗ ≤ ||f − Tn ||C ∗ En (f )C2π 2π

≤ =

1 π

Z sup |B1 (t)| t∈[0,2π)



|f 0 (t) − Qn (t)|dt

0

(1/2)En (f 0 )L∗2π .

This proves (2.2).

¤

There are several results on finding or estimating En (g)L∗2π . Three of them are given in the following lemma. Lemma 2.1. (a) If a sequence {Bk }∞ k=n+1 of the Fourier coefficients of an odd P∞ function g(x) = k=1 Bk sin kx ∈ L∗2π is 2-monotone, that is, Bk > 0, Bk −Bk+1 ≤ 0, and Bk − 2Bk+1 + Bk+2 ≥ 0 for k ≥ n + 1, then g ∈ L∗2π and En (g)L∗2π = 4

∞ X B(2m+1)(n+1) . 2m + 1 m=0

(b) If a sequence {Ak }∞ k=n+1 of the Fourier coefficients of an even function g(x) = P∞ A0 /2 + k=1 Ak cos kx is 3-monotone, that is, Ak > 0, Ak − Ak+1 ≤ 0, Ak −

633

ON UNIFORM APPROXIMATION OF PERIODIC FUNCTIONS

3

2Ak+1 + Ak+2 ≥ 0 and Ak − 3Ak+1 + 3Ak+2 − Ak+3 ≤ 0 for k ≥ n + 1, then g ∈ L∗2π and ∞ X

En (g)L∗2π = 4

(−1)m

m=0

(c) If g(x) = A0 /2 +

P∞ k=1

Ak cos kx ∈ L∗2π , then

à En (g)L∗2π ≤ C

A(2m+1)(n+1) . 2m + 1

|An+1 | + |A2n+2 | +

∞ X

! k|Ak+n − 2Ak+n+1 + Ak+n+2 | .

k=1

Statements (a) and (b) of Lemma 2.1 were proved by Nagy [6] (see also [8, Sections 2.11.5 and 2.13.32]), while statement (c) was established by the author [4, Lemma 3.4]. Note that the condition g ∈ L∗2π in Lemma 2.1(a) is equivalent to the statement that g is Fourier series and, in addition, it is equivalent to Pthe ∞ convergence of the series k=1 Bk /k. ∗ . Combining Theorem 2.1 and Lemma 2.1, we obtain upper estimates for En (f )C2π Let us illustrate this approach by the following examples. P∞ cos kx P∞ kx 0 Example 2.1. f (x) = k=2 ksin k=2 logq k logq k , q > 0. Then the series f (x) = −q ∞ 0 ∗ converges at every x ∈ (0, 2π) and f ∈ L2π since {log k}k=2 is a convex sequence [10, Theorem 5.1.5]. Moreover, {log−q k}∞ k=2 is a 3-monotone sequence. Therefore by Theorem 2.1 and Lemma 2.1(b), we obtain for n ≥ 1, En (f )

∗ C2π

0



(1/2)En (f )



2log−q (n + 1).

L∗ 2π

≤2

∞ X

(−1)m (2m + 1) logq [(2m + 1)(n + 1)] m=0 (2.3)

Note that in case q ∈ (0, 1] the Fourier series for f is conditionally convergent. In addition, we remark that an estimate −q ∗ ≤ Clog En (f )C2π (n + 1)

(2.4)

can be deduced from Lemma 1(c) as well. Indeed, setting Ak := log−q k, k ≥ 2, we have for n ≥ 2 ∞ X

∞ X

k|Ak+n − 2Ak+n+1 + Ak+n+2 | =

k=n+1 ∞ X



k=n+1

k

max

k≤y≤k+2

(k − n)|Ak − 2Ak+1 + Ak+2 |

k=n+1 ∞ X

|d2 (log−q y)/dy 2 | ≤ C

k −1 log−(q+1) k

k=n+1

≤ C log−q (n + 1). This implies (2.4). P∞ Example 2.2. f (x) = k=1 k λ exp(−A k α ) cos kx, λ ∈ R, A > 0, α > 0. Then P ∞ f 0 (x) = − k=1 k λ+1 exp(−A k α ) sin kx ∈ L∗2π . Since for n0 large enough, the

634

4

MICHAEL I. GANZBURG

sequence {k λ+1 exp(−A k α )}∞ k=n+1 is 2-monotone for n ≥ n0 , we obtain from Theorem 2.1 and Lemma 2.1(a) for n ≥ n0

×

0 ∗ ≤ (1/2)En (f )L∗ En (f )C2π = 2(n + 1)λ+1 2π Ã ! ∞ X α λ α α exp(−A (n + 1) ) + (2m + 1) exp(−A (2m + 1) (n + 1) ) m=1

≤ ×

λ+1

2(n + 1) µ Z exp(−A (n + 1)α ) +





λ

α

α

(2y + 1) exp(−A (2y + 1) (n + 1) )dy 0



2(1 + Cn−α )(n + 1)λ+1 exp(−A (n + 1)α )



C(n + 1)λ+1 exp(−A (n + 1)α ).

(2.5)

3. Lower Estimates ∗ , where f is an odd or even function with Efficient lower estimates for En (f )C2π monotone Fourier coefficients, are given in the following lemma. P∞ ∗ , where Lemma 3.1. (a) For n ≥ 1 and f (x) = a0 /2 + k=1 ak cos kx ∈ C2π ∞ {ak }k=n+1 is a positive non-increasing sequence, the following inequality holds: ∗ ≥ (1/4) sup (N + 1)aN +n . En (f )C2π

(3.1)

N ∈N

P∞ ∗ , where {bk }∞ (b) For n ≥ 1 and f (x) = k=1 bk sin kx ∈ C2π k=n+1 is a positive non-increasing sequence, the following inequality holds: ∗ ≥ (1/4) sup (N + 1)bN +n . En (f )C2π

(3.2)

N ∈N

Inequalities (3.1) and (3.2) follow from more general estimates obtained by Newman and Rivlin [7] and the author [3], respectively. Lower estimates for other classes of functions were developed in [4]. ∗ , where a function f of bounded variation satisfies Lower estimates for En (f )C2π some additional conditions, are based on the following theorem, which is interesting in itself. ∗ be an absolutely continuous function with f 0 ∈ L∗2π . Theorem 3.1. Let f ∈ C2π In addition, we assume that if Qn ∈ Tn is a polynomial of best mean approximation to f 0 , then f 0 − Qn 6= 0 a. e. on R and there exists β ∈ [0, 2π) such that f 0 − Qn has exactly 2(n + 1) sign changes on [β, 2π + β). Then −1 ∗ ≥ [4(n + 1)] En (f )C2π En (f 0 )L∗2π .

(3.3)

Equality in (3.3) holds for the function fn (x) := −

∞ X 4 cos[(2m + 1)(n + 1)x] . π(n + 1) m=0 (2m + 1)2

(3.4)

Proof. Let Qn ∈ Tn be a polynomial of best approximation to f 0 in L∗2π and let x1 , . . . , x2(n+1) be the points from (β, 2π + β), in which f 0 − Qn changes its sign. Setting x0 := β, x2(n+1)+1 := 2π + β and using the criterion of best approximation

635

ON UNIFORM APPROXIMATION OF PERIODIC FUNCTIONS

5

of f 0 in L∗2π [8, Section 2.8.1], we have for any polynomial Tn ∈ Tn Z 2π+β Z xi+1 2(n+1) X 0 = Tn (x) sgn(f 0 (x) − Qn (x))dx = ± (−1)i Tn (x)dx β

Ã

= ± 2

Z

2n+1 X

Z

xi+1

(−1)i+1

2π+β

Tn (x)dx − β

i=0

i=0

!

xi

Tn (x)dx .

(3.5)

β

Next, the following trivial relation holds for any constant C: 2n+1 X

(−1)i+1 C = 0.

(3.6)

i=0

Further applying (3.5) and (3.6)Pto Tn (x) = sin kx or Tn (x) = cos kx, 1 ≤ k ≤ n, n we obtain that for any Tn∗ (x) = k=1 (ak cos kx + bk sin kx), 2n+1 X

(−1)i+1 Tn∗ (xi+1 ) = 0.

(3.7)

i=0

It follows from (3.6) that (3.7) is valid for any Tn∗ ∈ Tn . Let us define a measure µ with its support on {x1 , . . . , x2(n+1) } by µ(xi+1 ) = 2(−1)i+1 ,

0 ≤ i ≤ 2n + 1.

(3.8)

Tn∗

Then µ is orthogonal to Tn since for any ∈ Tn , (3.7) is equivalent to the relation Z 2π+β Tn∗ (x)dµ(x) = 0. (3.9) β

Therefore taking account of the facts that f 0 − Qn 6= 0 a. e. on R and f 0 − Qn has exactly 2(n + 1) sign changes on [β, 2π + β), we get from (3.5), (3.6) (3.8), and (3.9) that Z 2π+β En (f 0 )L∗2π = (f 0 (x) − Qn (x)) sgn(f 0 (x) − Qn (x))dx β   Z xi+1 Z 2π+β 2(n+1) X = ± 2 (−1)i+1 (f 0 (x) − Qn (x))dx − (f 0 (x) − Qn (x))dx i=0

β

β

¯Z ¯ 2(n+1) ¯ 2π+β ¯ X ¯ ¯ ∗ , = ±2 (−1)i+1 f (xi+1 ) = ¯ f (x)dµ(x)¯ ≤ var µ En (f )C2π ¯ β ¯

(3.10)

i=0

where

Z

2π+β

var µ :=

|µ(x)| = 4(n + 1).

(3.11)

β

Therefore (3.3) follows from (3.10) and (3.11). To complete the proof, we note that the function fn defined by (3.4) is continuous and fn0 (x) = sgn sin[(n + 1)x] ∈ L∗2π . R 2π Since 0 fn0 (x)Tn∗ (x)dx = 0 for any Tn∗ ∈ Tn , we conclude that Qn (x) = 0 is a polynomial of best mean approximation to fn0 . Moreover for 0 < β < π/(2n + 2), the function fn0 −Qn = fn0 has exactly 2(n+1) sign changes on (β, 2π+β). Therefore fn satisfies all conditions of Theorem 3.1. Finally we have Z 2π 0 ∗ En (fn )L2π = |fn0 (x)|dx = 2π, (3.12) 0

636

6

MICHAEL I. GANZBURG

and by Chebyshev’s theorem, ∗ = ||fn ||C ∗ = π/(2n + 2), En (fn )C2π 2π

(3.13)

since for k = 0, 1, . . . , 2n + 1, µ ¶ ∞ X kπ 4 π 1 fn = (−1)k+1 = (−1)k+1 . n+1 π(n + 1) m=0 (2m + 1)2 2(n + 1) Thus (3.12) and (313) imply −1 ∗ = [4(n + 1)] En (fn )C2π En (fn0 )L∗2π .

This completes the proof of the theorem.

¤

The following corollary shows that estimate (3.3) holds for functions satisfying Nagy’s conditions. Corollary 3.1. If the Fourier coefficients of a function g = f 0 ∈ L∗2π satisfy conditions of Lemma 2.1(a) or Lemma 2.1(b), then inequality (3.3) holds. Proof. Nagy [6] (see also [8, Sections 2.11.5 and 2.13.32]) provedP that if conditions ∞ 0 of Lemma 1(a) or Lemma 1(b) are satisfied for g(x) = f (x) = k=1 Bk sin kx or P∞ 0 0 g(x) = f (x) = A0 /2 + k=1 Ak cos kx, then sin(n + 1)x (f (x) − Qn (x)) ≥ 0 or cos(n + 1)x (f 0 (x) − Qn (x)) ≥ 0 and f 0 − Qn 6= 0 a. e. on R. Here Qn is a polynomial of best mean approximation to f 0 . Therefore the conditions of Theorem 3.1 are satisfied and (3.3) follows. ¤ Lower estimates of Lemma 3.1 and Corollary 3.1 are used in the following examples. Example 3.1. f (x) =

P∞

sin kx k=2 k logq k ,

q > 0. Then using Lemma 3.1(b), we have

1 N +1 1 C sup ≥ ≥ . q q q 4 N ∈N (N + n) log (N + n) 8 log (2n) log (n + 1) P∞ Example 3.2. f (x) = k=1 k λ exp(−A k α ) cos kx, λ ∈ R, A > 0, α > 0. Then for n large enough, the Fourier coefficients of f 0 satisfy the conditions of Lemma 2.1(b). Therefore by Corollary 3.1, ∗ ≥ En (f )C2π

−1 ∗ ≥ [4(n + 1)] En (f )C2π En (f 0 )L∗2π = (1/2)(n + 1)λ Ã ! ∞ X α λ α α × exp(−A (n + 1) ) + (2m + 1) exp(−A (2m + 1) (n + 1) ) λ

m=0 α

≥ C(n + 1) exp(−A(n + 1) ). P∞ Example 3.3. f (x) = k=1 k −λ cos kx, λ > 1. Then by Lemma 3.1(a), ∗ ≥ En (f )C2π

N +1 1 sup ≥ C(n + 1)−λ+1 . 4 N ∈N (N + n)λ

(3.14)

In the following corollary, we combine upper and lower estimates of Examples 2.1, 2.2, 3.1, and 3.2.

637

ON UNIFORM APPROXIMATION OF PERIODIC FUNCTIONS

Corollary 3.2. (a) For f (x) =

P∞

sin kx k=2 k logq k ,

7

q > 0,

−q ∗ ≤ C1 log C2 log−q (n + 1) ≤ En (f )C2π (n + 1). P∞ λ α (b) For f (x) = k=1 k exp(−A k ) cos kx, λ ∈ R, A > 0, α > 0, A ∗ ) lim (En (f )C2π

−1

(n+1)−α

n→∞

= e−1 .

References [1] N.I. Akhiezer, Lectures on the Theory of Approximation, (2nd ed.), Nauka, Moscow, 1965. [Russian] [2] V.F. Babenko, S.A. Pichugov, Best linear approximation of some classes of differentiable periodic functions, Math. Notes, 27,325-329(1980). [3] M.I. Ganzburg, A lower estimate of best approximations of continuous functions, Ukranian Math. J., 41,763-767(1989). [4] M.I. Ganzburg, Best approximation of functions like |x|λ exp(−A|x|−α ), J. Approx. Theory, 92,379-410(1998). [5] N.P. Korneichuk, Exact Constants in Approximation Theory, Cambridge University Press, Cambridge, 1991. ¨ [6] B. Sz.-Nagy, Uber gewisse Extremalfragen bei transformierten trigonometrischen Entwicklungen. I. Periodischer Fall, Berichte Acad. d. Wiss., Leipzig, 90,103-134(1938). [7] D.J. Newman and T.J. Rivlin, Approximation of monomials by lower degree polynomials, Aeguat. Math. 14,451-455(1976). [8] A.F. Timan, Theory of Approximation of Functions of a Real Variable, MacMillan, New York, 1963. [9] R.M. Trigub and E.S. Belinsky, Fourier Analysis and Approximation of Functions, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2004. [10] A. Zygmund, Trigonometric Series (2nd ed.), Vol. I, Cambridge University Press, Cambridge, 1959. Department of Mathematics, Hampton University, Hampton, Virginia 23668 E-mail address: [email protected]

JOURNAL 638 OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.4, 638-644, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Some convergence theorems for a class of generalized Φ-hemicontractive mappings∗ Chang He Xiang, Zhe Chen, Ke Quan Zhao† College of Mathematics and Computer Science, Chongqing Normal University, Chongqing, 400047, P.R. China

Abstract. Suppose that E is a real normed linear space, C is a nonempty convex subset of E and T : C → C is a Lipschitz generalized Φ-hemicontractive mapping. Under suitable conditions on the iterative parameters, we show that the Mann iterative sequence with errors converges strongly to the unique fixed point of T . Key Words: φ-hemicontractive mapping; generalized Φ-hemicontractive mapping; Mann iterative sequence with errors; fixed point 2000 Mathematics Subject Classification: 47H05; 47H10; 54H25

1

INTRODUCTION

Let E be an arbitrary real normed linear space with dual space E ∗ and C be a nonempty ∗ subset of E. We denote by J the normalized duality mapping from E to 2E defined by J(x) = {x∗ ∈ E ∗ : hx, x∗ i = kxk2 = kx∗ k2 }, ∀ x ∈ E, where h·, ·i denotes the generalized duality pairing. A mapping T : C → E is called φ-hemicontractive if the fixed point set F (T ) = {x ∈ C : T x = x} is nonempty and there exists a strictly increasing function φ : [0, ∞) → [0, ∞) with φ(0) = 0 such that, for all x ∈ C and x∗ ∈ F (T ), there exists j(x − x∗ ) ∈ J(x − x∗ ) satisfying hT x − x∗ , j(x − x∗ )i ≤ kx − x∗ k2 − φ(kx − x∗ k)kx − x∗ k. T is called generalized Φ-hemicontractive if the fixed point set F (T ) is nonempty and there exists a strictly increasing function Φ : [0, ∞) → [0, ∞) with Φ(0) = 0 such that hT x − x∗ , j(x − x∗ )i ≤ kx − x∗ k2 − Φ(kx − x∗ k)

(1.1)

holds for all x ∈ C, x∗ ∈ F (T ) and for some j(x − x∗ ) ∈ J(x − x∗ ). Generalized Φ-hemicontractive mapping is also called uniformly hemicontractive in [1, 2]. It is well known that this kind of mappings play important roles in nonlinear analysis. ∗ This research is supported by the Education Committee project Research Foundation of Chongqing (Grant No. KJ070806) and Chongqing Key Laboratory of Operations Research and System Engineering. † E-mail address: [email protected] (C.H. Xiang), [email protected] (Z. Chen), [email protected] (K.Q. Zhao).

1

XIANG ET AL: CONVERGENCE THEOREMS

639

By taking Φ(s) = sφ(s), where φ : [0, ∞) → [0, ∞) is a strictly increasing function with φ(0) = 0, we know that the class of φ-hemicontractive mappings is a subset of the class of generalized Φ-hemicontractive mappings. The Example 1.1 below demonstrates that the class of Lipschitz φ-hemicontractive mappings is a proper subset of the class of Lipschitz generalized Φ-hemicontractive mappings. Example 1.1. Let E = < be the reals with the usual norm. Define T : E → E by Tx = x −

x , ∀ x ∈ E. 1 + x2

Then T is Lipschitz and T has the unique fixed point x∗ = 0 ∈ E. Let Φ : [0, ∞) → [0, ∞) s2 be defined by Φ(s) = 1+s 2 . Then Φ(s) is a strictly increasing function with Φ(0) = 0. For all x ∈ E, we have hT x − T x∗ , x − x∗ i = hT x, xi = x2 −

x2 = |x|2 − Φ(|x|) = |x − x∗ |2 − Φ(|x − x∗ |), 1 + x2

so that T is a Lipschitz generalized Φ-hemicontractive mapping. Since hT x − T x∗ , x − x∗ i = |x − x∗ |2 − Φ(|x − x∗ |) ≤ |x − x∗ |2 − φ(|x − x∗ |)|x − x∗ | s holds for all x ∈ E if and only if φ(s) ≤ 1+s 2 for all s ≥ 0, we know that T is not φhemicontractive since such a function φ is not increasing.

Many results have been proved on convergence or stability of Ishikawa iterative sequences (with errors) or Mann iterative sequences (with errors) for Lipschitz φ-hemicontractive mappings (see, e.g., [3-8] and the references therein). Recently, there are some authors studied the convergence of the iterative sequences (with errors) involving generalized Φ-hemicontractive mappings (see, e.g., [1,2,9-11] and the references therein). In 2005, Chidume and Chidume [11] proved the following results: Theorem CC1 [11, Theorem 2.2]. Let E be a real normed linear space and T : E → E be uniformly continuous. Let {xn } be a sequence in E defined iteratively from an arbitrary x0 ∈ E by xn+1 = an xn + bn T xn + cn un , n ≥ 0, where {an }, {bn }, {cn } are sequences in [0, 1] satisfying the following conditions: (i) an + bn + cn = 1, ∀ n ≥ 0; (ii)

∞ P

(bn + cn ) = ∞;

n=0

(iii)

∞ P

(bn + cn )2 < ∞;

n=0

(iv)

∞ P

cn < ∞ and such that

n=0

hT xn − x∗ , j(xn − x∗ )i ≤ kxn − x∗ k2 − Φ(kxn − x∗ k), ∀ n ≥ 0, where Φ : [0, ∞) → [0, ∞) is a strictly increasing function with Φ(0) = 0, and {un } is a bounded sequence in E. Then {xn } is bounded. Theorem CC2 [11, Theorem 2.3]. Let E be a real normed linear space, K be a nonempty subset of E and T : K → E be a uniformly continuous generalized Φ-hemicontractive mapping, 2

640

XIANG ET AL: CONVERGENCE THEOREMS

i.e., there exist x∗ ∈ F (T ) and a strictly increasing function Φ : [0, ∞) → [0, ∞), Φ(0) = 0 such that for all x ∈ K, there exists j(x − x∗ ) ∈ J(x − x∗ ) such that hT xn − x∗ , j(xn − x∗ )i ≤ kxn − x∗ k2 − Φ(kxn − x∗ k), ∀ n ≥ 0. (a) If y ∗ ∈ K is a fixed point of T , then y ∗ = x∗ and so T has at most one fixed point in K. (b) Suppose there exists x0 ∈ K, such that the sequence {xn } defined by xn+1 = an xn + bn T xn + cn un , n ≥ 0, is contained in K, where {an }, {bn } and {cn } are real sequences satisfying the following conditions: (i) an + bn + cn = 1; (ii)

∞ P

(bn + cn ) = ∞;

n=0

(iii)

∞ P

(bn + cn )2 < ∞;

n=0

(iv)

∞ P

cn < ∞; and {un } is a bounded sequence in E.

n=0

Then {xn } converges strongly to x∗ . In particular, if y ∗ is a fixed point of T in K, then {xn } converges strongly to y ∗ . The proof of conclusion (b) in Theorem CC2 is based on Theorem CC1. However, there is a gap in the proof of Theorem CC1. In fact, in the proof of Theorem CC1, in order to prove kxn − x∗ k ≤ 2Φ−1 (a0 ) for all n ≥ 0 by induction, where a0 is a positive number, by assuming that kxn − x∗ k ≤ 2Φ−1 (a0 ) and kxn+1 − x∗ k > 2Φ−1 (a0 ) hold for some n, the authors of [11] established following inequality (see [11, page 551]) kxn+1 − x∗ k2 ≤ kxn − x∗ k2 − αn Φ(2Φ−1 (a0 )) + M1 αn2 + cn ρ

(1.2)

for the same n. Unfortunately, (1.2) does not imply that Φ(2Φ−1 (a0 ))

n X j=0

αj ≤

n  X



kxj − x∗ k2 − kxj+1 − x∗ k2 + M1

n=0

∞ X j=0

αj2 + ρ

n X

cj ,

j=0

since (1.2) holds only for one given natural number n. Therefore, the result of Theorem CC2 may be not true since its proof is based on Theorem CC1. On the other hand, it has been proved in [6] that if T : K → K is a Lipschitz φhemicontractive mapping and cn = 0 for all n ≥ 0, then the conclusion (b) in Theorem CC2 holds. Since the class of Lipschitz φ-hemicontractive mappings is a proper subset of the class of Lipschitz generalized Φ-hemicontractive mappings, this leads to the following questions. Question 1. Suppose that T : C → C is a Lipschitz generalized Φ-hemicontractive mapping. Does the result in [6] hold? Question 2. Suppose that T : C → C is a Lipschitz generalized Φ-hemicontractive mapping. Does the conclusion (b) of Theorem CC2 hold? The main purpose of this paper is to give affirmative answer to Questions 1 and 2.

3

XIANG ET AL: CONVERGENCE THEOREMS

2

641

PRELIMINARIES The following lemmas will be used in the proof of our main results.

Lemma 2.1 (See, e.g., [11]). Let E be a real normed linear space. Then for all x, y ∈ E, we have kx + yk2 ≤ kxk2 + 2hy, j(x + y)i, ∀j(x + y) ∈ J(x + y). Lemma 2.2 (See, e.g., [12]). Let {an }, {bn }, {cn } be three nonnegative sequences satisfying the following condition: an+1 ≤ (1 + bn )an + cn , ∀ n ≥ n0 , where n0 is some nonnegative integer,

∞ P n=n0

bn < ∞ and

∞ P n=n0

cn < ∞. Then the limit lim an n→∞

exists. Lemma 2.3. Suppose that there exists a natural number n0 such that an , bn , cn and βn are nonnegative real numbers for all n ≥ n0 satisfying the following conditions: (i) an+1 ≤ (1 + bn )an − βn ϕ(an+1 ) + cn , ∀ n ≥ n0 , (ii)

∞ P n=n0

(iii)

∞ P n=n0

bn < ∞,

∞ P n=n0

cn < ∞,

βn = ∞,

where ϕ : [0, ∞) → [0, ∞) is a strictly increasing function with ϕ(0) = 0. Then lim an = 0. n→∞

Proof. By condition (i), we have an+1 ≤ (1 + bn )an + cn , ∀ n ≥ n0 . Using condition (ii) and Lemma 2.2, we obtain that lim an exists and so {an } is bounded. n→∞ Suppose that lim an = a and an ≤ M ( ∀ n ≥ n0 ), where a, M are nonnegative constants. n→∞

Let dn = M bn + cn . Then

∞ P n=n0

dn < ∞ by condition (ii). It follows from condition (i) that

an+1 ≤ an − βn ϕ(an+1 ) + dn ( ∀ n ≥ n0 ).

(2.1)

Now, we prove that a = 0. If a > 0, then there exists a nonnegative integer N ≥ n0 such that an+1 ≥ a2 for all n ≥ N . Since ϕ is strictly increasing, we have ϕ(an+1 ) ≥ ϕ( a2 ) > 0 for all n ≥ N . It follows from (2.1) that an+1 ≤ an − βn ϕ( a2 ) + dn ( ∀n ≥ N ) and so ∞ ∞ X a X ∞ = ϕ( ) βn ≤ aN + dn < ∞, 2 n=N n=N

which is a contradiction. Therefore, lim an = a = 0. This completes the proof. n→∞

2

Remark 2.1. Lemma 2.3 is different from Lemma 3 in [13], which requires that bn = 0 for all n ≥ 0 and cn = o(βn ).

4

642

XIANG ET AL: CONVERGENCE THEOREMS

3

MAIN RESULTS

Theorem 3.1. Let E be a real normed linear space, C be a nonempty convex subset of E and T : C → C be a Lipschitz generalized Φ-hemicontractive mapping. For given x0 ∈ C, suppose that the sequence {xn } ⊂ C is the Mann iterative sequence with errors defined by xn+1 = αn xn + βn T xn + γn un , n ≥ 0,

(3.1)

where {un } is a bounded sequence in C and {αn }, {βn }, {γn } are sequences in [0, 1] satisfying the following conditions: (1) αn + βn + γn = 1( ∀ n ≥ 0); (2)

∞ P

βn = ∞;

n=0

(3)

∞ P n=0

βn2 < ∞,

∞ P

γn < ∞.

n=0

Then {xn } converges strongly to the unique fixed point of T in C. Proof. It follows from (1.1) that F (T ) = {x ∈ C : T x = x} is singleton. Let F (T ) = {p} and M = sup{kun − pk : n ≥ 0}. Then M < ∞ and kun − xn k ≤ kun − pk + kp − xn k ≤ M + kxn − pk.

(3.2)

By (1.1), there exists j(xn − p) ∈ J(xn − p) such that hT xn − p, j(xn − p)i ≤ kxn − pk2 − Φ(kxn − pk), ∀ n ≥ 0,

(3.3)

where Φ : [0, ∞) → [0, ∞) is a strictly increasing function with Φ(0) = 0. Let L be the Lipschitz constant of T . By Lemma 1.1, (3.1) and (3.3), we obtain kxn+1 − pk2 = kαn (xn − p) + βn (T xn − p) + γn (un − p)k2 ≤ αn2 kxn − pk2 + 2βn hT xn − p, j(xn+1 − p)i +2γn hun − p, j(xn+1 − p)i ≤ αn2 kxn − pk2 + 2βn hT xn+1 − p, j(xn+1 − p)i +2βn hT xn − T xn+1, j(xn+1 − p)i + 2γn M kxn+1 − pk ≤ αn2 kxn − pk2 + 2βn kxn+1 − pk2 − Φ(kxn+1 − pk) +2βn Lkxn − xn+1 k · kxn+1 − pk + 2γn M kxn+1 − pk, ∀ n ≥ 0.

(3.4)

From (3.1) and condition (1), we have xn − xn+1 = βn (xn − T xn ) − γn (un − xn ). It follows from (3.2) that kxn − xn+1 k ≤ βn kxn − T xn k + γn (M + kxn − pk).

(3.5)

kxn − T xn k ≤ kxn − pk + kp − T xn k ≤ (1 + L)kxn − pk.

(3.6)

Observe that Taking (3.6) into (3.5), we obtain kxn − xn+1 k ≤ [(1 + L)βn + γn ] kxn − pk + M γn . 5

(3.7)

XIANG ET AL: CONVERGENCE THEOREMS

643

Taking (3.7) into (3.4), we have kxn+1 − pk2 ≤ αn2 kxn − pk2 + 2βn kxn+1 − pk2 − Φ(kxn+1 − pk) +2(δn kxn − pk + σn )kxn+1 − pk, 



(3.8)

where δn = Lβn [(1 + L)βn + γn ] , σn = (Lβn + 1)M γn , ∀ n ≥ 0. By condition (3), we have ∞ X

∞ X

δn < ∞,

n=0

σn < ∞.

(3.9)

n=0

√ Denote an = kxn − pk2 (∀ n ≥ 0) and ϕ(s) = 2Φ( s). It follows from (3.8) that √ √ an+1 ≤ αn2 an + 2βn an+1 − βn ϕ(an+1 ) + 2(δn an + σn ) an+1 , ∀ n ≥ 0. Noting that 0 ≤ αn ≤ 1 − βn , we obtain an+1 ≤ (1 − βn )2 an + 2βn an+1 − βn ϕ(an+1 ) + δn (an + an+1 ) + σn (1 + an+1 ) = (1 − 2βn + βn2 + δn )an + (2βn + δn + σn )an+1 − βn ϕ(an+1 ) + σn .

(3.10)

It follows from (3.9) and condition (3) that lim (2βn + δn + σn ) = 0. Thus, there exists a n→∞ 1 2 for

natural number n0 such that 2βn + δn + σn ≤ bn =

all n ≥ n0 . Let

1 − 2βn + βn2 + δn βn2 + 2δn + σn −1= , 1 − 2βn − δn − σn 1 − 2βn − δn − σn σn cn = . 1 − 2βn − δn − σn

By (3.10), an+1 ≤ (1 + bn )an − βn ϕ(an+1 ) + cn , ∀ n ≥ n0 . Since

1 2

≤ 1 − 2βn − δn − σn ≤ 1 for all n ≥ n0 , 0 ≤ bn ≤ 2(βn2 + 2δn + σn ), 0 ≤ cn ≤ 2σn , ∀ n ≥ n0 .

It follows from (3.9) and condition (3) that

∞ P n=n0

using Lemma 2.3 and condition (2), we obtain that lim kxn n→∞ lim kxn − pk = 0. This completes the proof of Theorem 3.1.

n→∞

∞ P

bn < ∞ and

cn n=n0 − pk2 = 2

< ∞. Therefore, by lim an = 0. That is,

n→∞

Remark 3.1. Theorem 3.1 gives an affirmative answer to Question 2. Taking γn = 0 for all n ≥ 0 in Theorem 3.1, we have the following result. Theorem 3.2. Let E be a real normed linear space, C be a nonempty convex subset of E and T : C → C be a Lipschitz generalized Φ-hemicontractive mapping. For given x0 ∈ C, suppose that the sequence {xn } ⊂ C is the Mann iterative sequence defined by xn+1 = (1 − βn )xn + βn T xn , n ≥ 0, where {βn } is a sequence in [0, 1] satisfying the following conditions: (1)

∞ P

βn = ∞;

n=0

(2)

∞ P n=0

βn2 < ∞.

Then {xn } converges strongly to the unique fixed point of T in C. Remark 3.2. Theorem 3.2 gives an affirmative answer to Question 1. 6

644

XIANG ET AL: CONVERGENCE THEOREMS

References [1] C. Moore and B.V.C. Nnoli, Iterative solution of nonlinear equations involving set-valued uniformly accretive operators, Comput. Math. Appl. 42(2001), 131-140. [2] C.E. Chidume and H. Zegeye, Approximation methods for nonlinear operator equations, Proc. Amer. Math. Soc. 131(2003), 2467-2478. [3] M.O. Osilike, Iterative solution of nonlinear equations of the φ-strongly accretive type, J. Math. Anal. Appl. 200(1996), 259-271. [4] Y.G. Xu, Ishikawa and Mann iterative processes with errors for nonlinear strongly accretive operator equations, J. Math. Anal. Appl. 224(1998), 91-101. [5] X.P. Ding, Iterative process with errors to nonlinear φ-strongly accretive operator equations in arbitrary Banach spaces, Compt. Math. Appl. 33(1997), 75-82. [6] M.O. Osilike, Iterative solution of nonlinear φ-strongly accretive operator equations in arbitrary Banach spaces, Nonlinear Anal. 36(1999), 1-9. [7] R.P. Agarwal, N.J. Huang and Y.J. Cho, Stability of iterative processes with errors for nonlinear equations of φ-strongly accretive type operators, Numer. Funct. Anal. Optimiz. 22(2001), 471-485. [8] Z.Y. Huang, Weak stability of Mann and Ishikawa iterations with errors for φhemicontractive operators, Appl. Math. Lett. 20(2007), 470-475. [9] S.S. Zhang, On the convergence problems of Ishikawa and Mann iteration process with errors for Φ-pseudo contractive type mappings, Appl. Math. Mech. 21(2000), 1-12. [10] F. Gu, Convergence theorems for Φ-pseudocontractive type mappings in normed linear spaces, Northeast Math. J. 17(2001), 340-346. [11] C.E. Chidume and C.O. Chidume, Convergence theorems for fixed points of uniformly continuous generalized Φ-hemi-contractive mappings, J. Math. Anal. Appl. 303 (2005), 545-554. [12] S.S. Chang, K.K. Tan, H.W.J. Lee and C.K. Chan, On the convergence of implicit iteration process with error for a finite family of asymptotically nonexpansive mappings, J. Math. Anal. Appl. 313 (2006), 273-283. [13] C.E. Chidume and E.U. Ofoedu, A new iteration process for generalized Lipschitz pseudocontractive and generalized Lipschitz accretive mappings, Nonlinear Anal. 67 (2007), 307315.

7

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.4, 645-655, 2010, COPYRIGHT 2010 EUDOXUS PRESS, 645 LLC

ITERATIVE ALGORITHMS FOR A COUNTABLE FAMILY OF NONEXPANSIVE MAPPINGS YISHENG SONG AND XIAO LIU COLLEGE OF MATHEMATICS AND INFORMATION SCIENCE, HENAN NORMAL UNIVERSITY, P.R. CHINA, 453007. Abstract. In this paper, strong convergence of Modified Mann type iteration is used to find some common fixed point of a countable family {Tn }+∞ n=1 of nonexpansive mappings in Banach space, and their proof is different from ones of Aoyama et al.[Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space, Nonlinear Analysis, 67(2007) 2350–2360] and other existing results which is independent of the convergence of the implicit anchor-like continuous path zt , defined by zt = tu + (1 − t)T zt . Key Words and Phrases: Countable family of nonexpansive mappings; Modified Mann type iteration; Mann type iteration; uniformly Gˆ ateaux differentiable.

1. Introduction Let E be a Banach space and E ∗ be its dual space. Let K be a nonempty closed convex subset of E and T : K → K be a mapping. T is said to be non-expansive if kT x − T yk ≤ kx − yk for all x, y ∈ K. In order to find a fixed point of nonexpansive mapping T , Mann [14] and Halpern [7] respectively introduced the iteration procedure in a Hilbert space as follows (We refer them to as Mann iteration and Halpern iteration): xn+1 = (1 − αn )T xn + αn xn

(1.1)

xn+1 = (1 − αn )T xn + αn u,

(1.2)

and where {αn } is a sequences in [0, 1]. Subsequently, Mann iteration and Halpern iteration ware studied extensively over the last twenty years for constructions of fixed points of nonlinear mappings and of solutions of nonlinear operator equations involving monotone, accretive and pseudocontractive operators. For example, [3, 4, 11, 12, 13, 16, 19, 20, 21, 26, 27, 28, 31, 32, 33, 35, 36] and many other results which isn’t mentioned here. The modified version of Mann iteration and Halpern iteration were investigated widely by many mathematic workers. For example, Kim-Xu [10] and ChidumeChidume [5] dealt with the strong convergence of the following iterative scheme (so-called Modified Mann iteration) for a non-expansive mapping T : for x0 , u ∈ K, xn+1 = αn u + (1 − αn )(βn xn + (1 − βn )T xn ). 1991 Mathematics Subject Classification. 47H06, 47J05, 47J25, 47H10, 47H17. Email: [email protected], [email protected]. 1

(1.3)

646

2

Y. SONG, X. LIU

where αn , βn ∈ [0, 1]. Song-Chen [25] researched strong convergence of Modified Mann iteration (1.3) in the frame of reflexive Banach space which is complementary and development of the above results. Recently, for a nonexpansive mappings sequence {Tn }+∞ n=1 with some special condition, Jung [9] and O’Hara et al. [17, 18] respectively studied strong convergence of the following iteration: for x0 , u ∈ K, xn+1 = αn u + (1 − αn )Tn xn ,

(1.4) P∞ where αn ∈ [0, 1] such that (C1) limn→∞ αn = 0 and (C2) n=1 αn = ∞. Unfortunately, there was a gap in the proof lines of their main results. With the purpose of overcoming the gap, Song-Chen [29, 30] introduced the conception of a uniformly asymptotically regular for {Tn }+∞ n=1 and proved several strong convergence results by using the conception. Other investigation of approximating common fixed point for countable family of nonexpansive mappings by means of the iteration (1.4) can be found in Refs [1, 22, 24] and many results which isn’t cited here. Very recently, still for a nonexpansive mappings sequence {Tn }+∞ n=1 with some specific condition, Aoyama et al.[2] introduced Mann type iteration procedure: let x1 ∈ K and xn+1 = αn xn + (1 − αn )Tn xn , (1.5) where αn ∈ [a, b] ⊂ [0, 1], and showed its strong and weak convergence in uniformly convex Banach space. At the same time, Song [23] also carefully researched the convergence of the iteration (1.5) by the aid of the uniformly asymptotically regular of {Tn }+∞ n=1 in a reflexive Banach space. In this paper, for a countable family {Tn }+∞ n=1 of nonexpansive mappings with some appropriate condition (see section 3), we will introduce the following iteration procedure: let x1 , u ∈ K and xn+1 = αn u + βn xn + (1 − αn − βn )Tn xn ,

(1.6)

and show its strong convergence in Banach space when αn , βn ∈ [0, 1] satisfy the ∞ P conditions lim αn = 0, αn = ∞ and 0 < lim inf βn ≤ lim sup βn < 1. We also n→∞

n=0

n→∞

n→∞

go on exploring the weak convergence of the Mann type iteration (1.5) under the condition 0 < lim inf αn ≤ lim sup αn < 1. n→∞

n→∞

2. Preliminaries and basic results Throughout this paper, the fixed point set of T is denoted by F (T ) := {x ∈ K; T x = x}. Let E be a real Banach space and let J denote the normalized duality ∗ mapping from E into 2E given by J(x) = {f ∈ E ∗ , hx, f i = kxkkf k, kxk = kf k}, ∀ x ∈ E, where E ∗ is the dual space of E and h·, ·i denotes the generalized duality pairing. ∗ We write xn * x (respectively xn * x) to indicate that the sequence xn weakly ∗ (respectively weak ) converges to x; as usual xn → x will symbolize strong convergence. In order to show our main results, the following conceptions and lemmas are needed. Let S(E) := {x ∈ E; kxk = 1} denote the unit sphere of a Banach space E. A Banach space E is said to have

647

COUNTABLE FAMILY OF NONEXPANSIVE MAPPINGS

3

(i) a Gˆ ateaux differentiable norm (we also say that E is smooth), if the limit lim

t→0

kx + tyk − kxk t

(∗)

exists for each x, y ∈ S(E); (ii) a uniformly Gˆ ateaux differentiable norm, if for each y in S(E), the limit (∗) is uniformly attained for x ∈ S(E); (iii) a Fr´echet differentiable norm, if for each x ∈ S(E), the limit (∗) is attained uniformly for y ∈ S(E); (iv) a uniformly Fr´echet differentiable norm (we also say that E is uniformly smooth), if the limit (∗) is attained uniformly for (x, y) ∈ S(E) × S(E). (v) fixed point property for nonexpansive self-mappings , if each nonexpansive self-mapping defined on any bounded closed convex subset K of E has at least a fixed point. A Banach space E is said to be (vi) strictly convex if kx + yk < 1; 2 (vii) uniformly convex if for all ε ∈ [0, 2], ∃δε > 0 such that kxk = kyk = 1, x 6= y implies

kx + yk < 1 − δε whenever kx − yk ≥ ε. 2 (viii) The subset K of E is a Chebyshev set, if ∀x ∈ E, there exactly exists unique element y ∈ K such that kx − yk = d(x, K) = inf{kx − zk; z ∈ K}. The following results is well known which are found in reference[8, 34, 15]: the normalized duality mapping J in a Banach space E with a uniformly Gˆateaux differentiable norm is single-valued and strong-weak* uniformly continuous on any bounded subset of E; each uniformly convex Banach space E is reflexive and strictly convex and has fixed point property for nonexpansive self-mappings; every uniformly smooth Banach space E is a reflexive Banach space with a uniformly Gˆateaux differentiable norm and has fixed point property for nonexpansive selfmappings; every nonempty closed convex subset is a Chebyshev set in strictly convex and reflexive Banach space (see [15, Corollary 5.1.19]); Each weakly compact convex subset is a Chebyshev set in strictly convex Banach space E (see [8, Lemma 9.3.7]). Lemma 2.1 ([33, Lemma 2.2]) Let {xn } and {yn } be two bounded sequences in a Banach space E and βn ∈ [0, 1] with 0 < lim inf βn ≤ lim sup βn < 1. Suppose kxk = kyk = 1 implies

n→∞

xn+1 = βn xn + (1 − βn )yn for all integers n ≥ 1 and

n→∞

lim sup(kyn+1 − yn k − kxn+1 − xn k) ≤ 0. n→∞

Then lim kxn − yn k = 0. n→∞

Lemma 2.2 (see [8, Lemma 9.3.6]) Let C be a weakly compact subset in Banach space E and let f : E → R be a weakly lower semi-continuous function. Then the infimum of f is achieved in C. In the proof of our main theorems, we also need the following definitions and results. Let µ be a continuous linear functional on l∞ satisfying kµk = 1 = µ(1). Then we know that µ is a mean on N if and only if inf{an ; n ∈ N } ≤ µ(a) ≤ sup{an ; n ∈ N }

648

4

Y. SONG, X. LIU

for every a = (a1 , a2 , · · · ) ∈ l∞ . According to time and circumstances, we µn (an ) instead of µ(a). A mean µ on N is called a Banach limit if µn (an ) = µn (an+1 ) ∞

for every a = (a1 , a2 , · · · ) ∈ l . Furthermore, we know the following results [35, 34]. Lemma 2.3 ([35, Lemma 1]) Let C be a nonempty closed convex subset of Banach space E with uniformly Gˆ ateaux differentiable norm. Let {xn } be a bounded sequence of E and let µn be a mean µ on N and z ∈ C. Then µn kxn − zk2 = min µn kxn − yk2 y∈C

if and only if µn hy − z, J(xn − z)i ≤ 0, ∀y ∈ C.

Lemma 2.4 ([32, Proposition 2]) Let α is a real number and (x0 , x1 , . . .) ∈ l∞ such that µn xn ≤ α for all Banach Limits. If lim sup(xn+1 − xn ) ≤ 0, then n→∞

lim sup xn ≤ α. n→∞

Lemma 2.5 ([36]) Let {an } be a sequence of nonnegative real numbers satisfying the property an+1 ≤ (1 − γn )an + γn βn , n ≥ 0, where {γn } ⊂ (0, 1) and {βn } ⊂ R such that ∞ P (i) γn = ∞; (ii) lim sup βn ≤ 0. n→∞

n=0

Then {an } converges to zero, as n → ∞. 3. Strong convergence of the modified Mann type iteration Let K be a nonempty closed convex subset of Banach space E. Suppose {Tn } (n = 1, 2, . . .) is a countable family of nonexpansive mappings from K into itself such that ∞ T F := F (Tn ) 6= ∅. Recently, in uniformly convex Banach space, Aoyama et al.[1] n=1

obtained the strong convergence of modified Halpern type iteration (1.4) if {Tn }+∞ n=1 and {αn }+∞ n=1 ⊂ (0, 1] satisfy the following conditions: ∞ P (B1) sup kTn+1 x − Tn xk < +∞ for any bounded subset C of K; n=0 x∈C

(C1) lim αn = 0 and n→∞

αn+1 n→∞ αn

(C2) either lim

∞ P n=0

αn = ∞;

= 1 or

∞ P n=0

|αn+1 − αn | < +∞.

Their proof also depend on the following important fact((B1) implies (B2), see [1, Lemma 3.2]): (B2) for any bounded subset C of K, there exists a nonexpansive mapping T of K into itself such that lim sup kT x − Tn xk = 0 and F (T ) = F.

n→∞ x∈C

649

COUNTABLE FAMILY OF NONEXPANSIVE MAPPINGS

5

In this section, we introduce the following modified Mann type iteration: for x1 , u ∈ K, xn+1 = αn u + βn xn + (1 − αn − βn )Tn xn , (3.1) and show its strong convergence under the conditions (C1) and (B2) along with (C3) 0 < lim inf βn ≤ lim sup βn < 1. n→∞

n→∞

With the purpose of proving main results, we first show the following lemma which doesn’t depend on the convergence of the implicit anchor-like continuous path zt = tu + (1 − t)T zt as many existent results (see [5, 7, 10, 32, 35, 36] and so on). Lemma 3.1 Let K be either a nonempty weakly compact convex subset of a strictly convex Banach space E or a nonempty closed convex subset of a reflexive Banach space E with fixed point property for nonexpansive self-mappings. Assume that T : K → K is a nonexpansive mapping with F (T ) 6= ∅ and a bounded sequence {xn } of K satisfies lim kxn+1 − T xn k = 0 and lim kxn − xn+1 k = 0.

n→∞

n→∞

Suppose that E has a uniformly Gˆ ateaux differentiable norm. Then there exists x∗ ∈ F (T ) such that lim suphu − x∗ , J(xn+1 − x∗ )i ≤ 0 for each u ∈ K. n→∞

Proof. Let g(x) = µn kxn − xk2 , ∀x ∈ K. Then g(x) is continuous and convex on K. Define a set K1 = {x ∈ K; g(x) = inf g(y)}. y∈K

From Lemma 2.2 or the reflexivity of E and the property of g(x) together with the boundedness of {xn }, we obtain K1 is a nonempty bounded closed convex subset of K and hence weak compact. For ∀x ∈ K1 , then g(T x)

= µn kxn − T xk2 = µn kxn+1 − T xk2 ≤ µn (kxn+1 − T xn k + kT xn − T xk)2 ≤ µn kxn − xk2 = g(x).

Hence Tx ∈ K1 , namely T(K1 ) ⊂ K1 . Case 1. Assume that E is strictly convex. Taking y ∈ F(T), there exists a unique x∗ ∈ K1 such that ‖y − x∗ ‖ = d(y, K1 ) = inf_{x∈K1} ‖y − x‖. Since Tx∗ ∈ K1 , we have ‖y − Tx∗ ‖ = ‖Ty − Tx∗ ‖ ≤ ‖y − x∗ ‖. Hence x∗ = Tx∗ by the uniqueness of x∗ in K1 . Case 2. Assume that E has the fixed point property for nonexpansive self-mappings. Since T(K1 ) ⊂ K1 , there exists x∗ ∈ K1 such that x∗ = Tx∗ . Using Lemma 2.3 and the definition of K1 , we get that for u ∈ K, µn ⟨u − x∗ , J(xn − x∗ )⟩ ≤ 0.


On the other hand, since lim_{n→∞} ‖xn+1 − xn ‖ = 0 and the duality mapping J is norm-to-weak∗ uniformly continuous in a Banach space with a uniformly Gâteaux differentiable norm, we have
lim_{n→∞} (⟨u − x∗ , J(xn+1 − x∗ )⟩ − ⟨u − x∗ , J(xn − x∗ )⟩) = 0.

Hence the sequence {⟨u − x∗ , J(xn − x∗ )⟩} satisfies the conditions of Lemma 2.4. As a result, we must have lim sup_{n→∞} ⟨u − x∗ , J(xn+1 − x∗ )⟩ ≤ 0.

Theorem 3.2 Let E be a reflexive Banach space with a uniformly Gâteaux differentiable norm and with the fixed point property for nonexpansive self-mappings. Assume that K is a nonempty closed convex subset of E and {Tn }_{n=1}^{+∞} is a countable family of nonexpansive mappings from K into itself such that F := ∩_{n=1}^∞ F(Tn ) ≠ ∅ and condition (B2) holds. Let {αn } and {βn } be two real number sequences in [0, 1] satisfying (C1) and (C3), respectively. Then the modified Mann type iteration sequence {xn } defined by (3.1) strongly converges to some point of F.
Proof. At first, we show that {xn } is bounded. Taking p ∈ F, we have
‖xn+1 − p‖ ≤ (1 − αn − βn )‖Tn xn − p‖ + βn ‖xn − p‖ + αn ‖u − p‖ ≤ (1 − αn − βn )‖xn − p‖ + βn ‖xn − p‖ + αn ‖u − p‖ ≤ max{‖xn − p‖, ‖u − p‖} ≤ · · · ≤ max{‖x0 − p‖, ‖u − p‖}.
Thus {xn } is bounded, and hence so is {Tn xn } by ‖Tn xn − p‖ ≤ ‖xn − p‖. Next we prove that
lim_{n→∞} ‖xn+1 − T xn ‖ = 0 and lim_{n→∞} ‖xn − xn+1 ‖ = 0.   (3.2)
Indeed, let λn = αn /(1 − βn ) and zn = λn u + (1 − λn )Tn xn . Then
lim_{n→∞} λn = 0 and xn+1 = βn xn + (1 − βn )zn .   (3.3)

Therefore, for a constant M such that M > max{‖u‖, sup_{n∈N} ‖Tn xn ‖} and any bounded subset C of K containing {xn }, we have
‖zn+1 − zn ‖ = ‖λn+1 u + (1 − λn+1 )Tn+1 xn+1 − (λn u + (1 − λn )Tn xn )‖
≤ |λn+1 − λn |‖u‖ + ‖Tn+1 xn+1 − Tn xn ‖ + λn ‖Tn xn ‖ + λn+1 ‖Tn+1 xn+1 ‖
≤ |λn+1 − λn |‖u‖ + ‖Tn+1 xn+1 − Tn+1 xn ‖ + ‖Tn+1 xn − T xn ‖ + ‖T xn − Tn xn ‖ + (λn + λn+1 )M
≤ ‖xn+1 − xn ‖ + (|λn+1 − λn | + λn + λn+1 )M + sup_{x∈C} ‖Tn+1 x − T x‖ + sup_{x∈C} ‖T x − Tn x‖.
Thus, by assumption (B2) and (3.3), we have
lim sup_{n→∞} (‖zn+1 − zn ‖ − ‖xn+1 − xn ‖) ≤ lim_{n→∞} sup_{x∈C} ‖Tn+1 x − T x‖ + lim_{n→∞} sup_{x∈C} ‖T x − Tn x‖ + lim_{n→∞} (|λn+1 − λn | + λn + λn+1 )M = 0.


By Lemma 2.1, we obtain lim_{n→∞} ‖xn − zn ‖ = 0, and hence
lim_{n→∞} ‖xn+1 − xn ‖ = lim_{n→∞} (1 − βn )‖xn − zn ‖ = 0.

Since
‖xn+1 − T xn ‖ ≤ ‖xn+1 − Tn xn ‖ + ‖Tn xn − T xn ‖ ≤ ‖xn+1 − xn ‖ + ‖xn − zn ‖ + ‖zn − Tn xn ‖ + sup_{x∈C} ‖T x − Tn x‖
≤ ‖xn+1 − xn ‖ + ‖xn − zn ‖ + λn ‖u − Tn xn ‖ + sup_{x∈C} ‖T x − Tn x‖,
we have lim_{n→∞} ‖xn+1 − T xn ‖ = 0. By Lemma 3.1, there exists x∗ ∈ F(T) such that
lim sup_{n→∞} ⟨u − x∗ , J(xn+1 − x∗ )⟩ ≤ 0.   (3.4)

Finally, we show that xn → x∗ (n → ∞). In fact,
‖xn+1 − x∗ ‖2 = (1 − αn − βn )⟨Tn xn − x∗ , J(xn+1 − x∗ )⟩ + βn ⟨xn − x∗ , J(xn+1 − x∗ )⟩ + αn ⟨u − x∗ , J(xn+1 − x∗ )⟩
≤ (1 − αn − βn )(‖Tn xn − x∗ ‖2 + ‖J(xn+1 − x∗ )‖2 )/2 + βn (‖xn − x∗ ‖2 + ‖J(xn+1 − x∗ )‖2 )/2 + αn ⟨u − x∗ , J(xn+1 − x∗ )⟩
≤ (1 − αn )(‖xn − x∗ ‖2 + ‖xn+1 − x∗ ‖2 )/2 + αn ⟨u − x∗ , J(xn+1 − x∗ )⟩.
Therefore,
‖xn+1 − x∗ ‖2 ≤ (1 − αn )‖xn − x∗ ‖2 + 2αn ⟨u − x∗ , J(xn+1 − x∗ )⟩.   (3.5)
By condition (C1) and (3.4), we now apply Lemma 2.5 to conclude that lim_{n→∞} ‖xn − x∗ ‖ = 0.

The proof is completed. Using the same proof techniques as in Theorem 3.2, the following results are obtained easily; since the proofs are essentially repetitions, we omit them.
Theorem 3.3 Let E be a strictly convex Banach space with a uniformly Gâteaux differentiable norm. Assume that K is a nonempty weakly compact convex subset of E and {Tn }_{n=1}^{+∞} is a countable family of nonexpansive mappings from K into itself such that F := ∩_{n=1}^∞ F(Tn ) ≠ ∅ and condition (B2) holds. Let {αn } and {βn } be two real number sequences in [0, 1] satisfying (C1) and (C3), respectively. Then the modified Mann type iteration sequence {xn } defined by (3.1) strongly converges to some point of F.
Theorem 3.4 Let E be a reflexive and strictly convex Banach space with a uniformly Gâteaux differentiable norm and K be a nonempty closed convex subset of E. Suppose that {Tn }_{n=1}^{+∞} is a countable family of nonexpansive mappings from K into itself such that F := ∩_{n=1}^∞ F(Tn ) ≠ ∅ and condition (B2) holds. Let {αn } and


{βn } be two real number sequences in [0, 1] satisfying (C1) and (C3), respectively. Then the modified Mann type iteration sequence {xn } defined by (3.1) strongly converges to some point of F.
Proof. Using the same argument as in Theorem 3.2, we can show that {xn } is bounded and that (3.2) holds. Since every nonempty closed convex subset of a strictly convex and reflexive Banach space is a Chebyshev set (see [15, Corollary 5.1.19]), the conclusion of Lemma 3.1 also holds. The desired conclusion follows.
Remark 1 (i) There are many spaces which have the fixed point property for nonexpansive self-mappings; for example, uniformly convex Banach spaces, uniformly smooth Banach spaces, reflexive Banach spaces with normal structure, Banach spaces with Opial's condition, and so on.
(ii) We remark that Theorem 3.3 is independent of Theorem 3.2. On the one hand, it is easy to find examples of spaces which satisfy the fixed point property for nonexpansive self-mappings but are not strictly convex. On the other hand, it appears to be unknown whether a weakly compact convex subset of a strictly convex Banach space has the fixed point property for nonexpansive self-mappings.
(iii) In the above theorems, not only is condition (B2) weaker than (B1), but the proof also differs from that of Aoyama et al. [1] in that it does not depend on the convergence of the implicit anchor-like continuous path zt defined by zt = tu + (1 − t)T zt.

4. Weak convergence of Mann type iteration

Recall that a Banach space E is said to satisfy Opial's condition [16] if for any sequence {xn } in E, xn ⇀ x (n → ∞) implies
lim sup_{n→∞} ‖xn − x‖ < lim sup_{n→∞} ‖xn − y‖, ∀y ∈ E with x ≠ y.

Hilbert spaces and ℓp (1 < p < ∞) satisfy Opial's condition, and Banach spaces with a weakly sequentially continuous duality mapping satisfy Opial's condition [6]. We now show weak convergence of the Mann type iteration (1.5), which extends the main result of Aoyama et al. [2] from uniformly convex Banach spaces to reflexive Banach spaces.
Theorem 4.1 Let E be a reflexive Banach space satisfying Opial's condition and K be a nonempty closed convex subset of E. Suppose {Tn } (n = 1, 2, . . .) is a countable family of nonexpansive mappings from K into itself satisfying condition (B2) and F := ∩_{n=1}^∞ F(Tn ) ≠ ∅. Let {xn } be a sequence of the Mann type iteration defined by (1.5) and αn ∈ [0, 1] satisfy 0 < lim inf_{n→∞} αn ≤ lim sup_{n→∞} αn < 1.

Then {xn } weakly converges to some point of F . Proof. Take p ∈ F . We have kxn+1 − pk ≤ (1 − αn )kxn − pk + αn kTn xn − pk ≤ (1 − αn )kxn − pk + αn kxn − pk ≤ kxn − pk.


Then {‖xn − p‖} is a decreasing sequence, hence lim_{n→∞} ‖xn − p‖ exists for each p ∈ F and {xn } is bounded. Let C be any bounded subset of K containing {xn }; then
‖Tn+1 xn+1 − Tn xn ‖ ≤ ‖Tn+1 xn+1 − Tn+1 xn ‖ + ‖Tn+1 xn − T xn ‖ + ‖T xn − Tn xn ‖ ≤ ‖xn+1 − xn ‖ + sup_{x∈C} ‖Tn+1 x − T x‖ + sup_{x∈C} ‖T x − Tn x‖.

It follows from the hypothesis that
lim sup_{n→∞} (‖Tn+1 xn+1 − Tn xn ‖ − ‖xn+1 − xn ‖) ≤ lim_{n→∞} sup_{x∈C} ‖Tn+1 x − T x‖ + lim_{n→∞} sup_{x∈C} ‖T x − Tn x‖ = 0.

By Lemma 2.1, we obtain lim_{n→∞} ‖xn − Tn xn ‖ = 0. Thus we have
‖xn − T xn ‖ ≤ ‖xn − Tn xn ‖ + ‖Tn xn − T xn ‖ ≤ ‖xn − Tn xn ‖ + sup_{x∈C} ‖T x − Tn x‖ → 0 as n → ∞.
Hence, lim_{n→∞} ‖xn − T xn ‖ = 0.

Since E is reflexive, there exists a subsequence {x_{nk}} of {xn } such that x_{nk} ⇀ x∗ for some x∗ ∈ K. Then x∗ ∈ F. In fact, suppose not. Then Opial's property of E implies
lim sup_{k→∞} ‖x_{nk} − x∗ ‖ < lim sup_{k→∞} ‖x_{nk} − T x∗ ‖ ≤ lim sup_{k→∞} (‖x_{nk} − T x_{nk}‖ + ‖T x_{nk} − T x∗ ‖) = lim sup_{k→∞} ‖x_{nk} − x∗ ‖.

This is a contradiction. Hence x∗ = T x∗ ∈ F. Next we show xn ⇀ x∗ . Suppose not. Then there exists another subsequence {x_{ni}} of {xn } such that x_{ni} ⇀ x ≠ x∗ . Then we also have x = T x. From Opial's property, we have
lim_{n→∞} ‖xn − x‖ = lim sup_{i→∞} ‖x_{ni} − x‖ < lim sup_{i→∞} ‖x_{ni} − x∗ ‖ = lim sup_{k→∞} ‖x_{nk} − x∗ ‖ < lim sup_{k→∞} ‖x_{nk} − x‖ = lim_{n→∞} ‖xn − x‖,
which is a contradiction. So the conclusion of the theorem follows.
Remark 2. It is not known whether the assumption (B2) can be replaced by the weaker condition (B3):
(B3) for any bounded subset C of K, there exists a nonexpansive mapping T of K into itself and a subsequence {T_{ni}} of {Tn } such that lim_{i→∞} sup_{x∈C} ‖T x − T_{ni} x‖ = 0 and F(T) = F.


References 1. K. Aoyama, Y. Kimura, W. Takahashi and M. Toyoda, Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space, Nonlinear Analysis, 67(2007) 2350–2360. 2. K. Aoyama, Y. Kimura, W. Takahashi and M. Toyoda, Finding common fixed points of a countable family of nonexpansive mappings in a Banach space, Scientiae Mathematicae Japonicae, 66(2007) 89–99 :e2007, 325–335. 3. F.E. Browder, Fixed point theorems for noncompact mappings in Hilbert space, Proc. Nat. Acad. Sci. U.S.A. 53 (1965) 1272-1276. 4. F.E. Browder, Nonlinear operators and nonlinear equations of evolution in Banach spaces, Nonlinear Functional Analysis (Proc. Sympos. PureMath., Vol. 18, Part 2, Chicago, Ill., 1968), American Mathematical Society, Rhode Island, 1976, pp. 1–308. 5. C. E. Chidume and C. O. Chidume, Iterative approximation of fixed points of nonexpansive mappings, J. Math. Anal Appl. 318(2006) 288–295. 6. J.P. Gossez and E.L. Dozo, Some geometric properties related to the fixed point theory for nonexpansive mappings, Pacfic J. Math. 40(1972),565–573. 7. B. Halpern, Fixed points of nonexpansive maps, Bull. Amer. Math. Soc. 73(1967) 957–961. 8. V.I. Istratescu, Fixed Point Theory: An Introduction, Published by D.Reidel Publishing Company (1981), The Netherlands. 9. J.S. Jung, Iterative approaches to common fixed points of nonexpansive mappings in Banach spaces, J. Math. Anal. Appl. 302(2005) 509-520. 10. T.H. Kim and H.K. Xu, Strong convergence of modified Mann iterations, Nonlinear Anal. 61(2005) 51–60. 11. W. A. Kirk and S. Massa, Remarks on asymptotic and Chebyshev centers, Houston J. Math. 16 (1990), no. 3, 357–364. 12. W. A. Kirk, Transfinte methods in metric fixed point theorey. Abstract and Applied Analysis, 2003:5(2003) 311–324. 13. T. C. Lim, Remarks on some fixed point theorems, Proc. Amer. Math. Soc, 60 (1976), 179–182. 14. W.R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc. 4(1953) 506–510. 15. R. E. Megginson, An introduction to Banach space theory , 1998 Springer-Verlag New Tork, Inc. 16. Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc, 73 (1967), 591–597. 17. J. G. O’Hara, P. Pillay and H.K. Xu, Iterative approaches to finding nearest common fixed point of nonexpansive mappings in Hilbert spaces, Nonlinear Analysis, 54(2003) 1417–1426. 18. J. G. O’Hara, P. Pillay and H.K. Xu, Iterative Approaches to Convex Feasibility Problem in Banach Space, Nonlinear Analysis, 64(2006) 2022–2042. 19. S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, J. Math. Anal Appl. 75(1980) 287-292. 20. Y. Song, A new sufficient condition for the strong convergence of Halpern type iterations, Applied Mathematics and Computation, 198(2)(2008), 721–728. 21. Y. Song, Yisheng Song, On a Mann type implicit iteration process for continuous pseudocontractive mappings, Nonlinear Analysis, 67(2007) 3058–3063. 22. Y. Song, Iterative selection methods for the common fixed point problems in a Banach space, Applied Mathematics and Computation, 193(2007) 7–17. 23. Y. Song, Weak and strong convergence of Mann’s-type iterations for a countable family of nonexpansive mappings, Journal of the Korean Mathematical Society (Accepted). 24. Y. Song, Iterative approximation of a countable family of nonexpansive mappings, Applicable Analysis, 86(2007), 1329–1337. 25. Y. Song and R. 
Chen, Strong convergence of an iterative method for non-expansive mappings, Mathematische Nachrichten, 281(8)(2008), 1–9. 26. Y. Song and Y.J. Cho, Iterative approximations for multivalued nonexpansive mappings in reflexive Banach spaces, Math. Inequal. Appl.(Accepted). 27. Y. Song and R. Chen, Strong convergence theorems on an iterative method for a family of finite nonexpansive mappings, Applied Mathematics and Computation, 180 (2006) 275–287. 28. Y. Song and R. Chen, Viscosity approximation methods for nonexpansive nonself-mappings, J. Math. Anal. Appl. 321(2006), 316–326.


29. Y. Song and R. Chen, Iterative approximation to common fixed points of nonexpansive mapping sequences in reflexive Banach spaces, Nonlinear Analysis, 66 (2007) 591–603. 30. Y. Song, R. Chen and H. Zhou, Viscosity approximation methods for nonexpansive mapping sequences in Banach spaces, Nonlinear Analysis, 66 (2007) 1016–1024. 31. Y. Song and S. Xu, Strong convergence theorems for nonexpansive semigroup in Banach spaces, J. Math. Anal. Appl., 338 (2008) 152–161. 32. N. Shioji and W. Takahashi, Strong convergence of approximated sequences for nonexpansive mappings in Banach spaces, Proc. Amer. Math. Soc., 125 (1997) 3641–3645. 33. T. Suzuki, Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces, Fixed Point Theory and Applications, 2005(1) (2005) 103–123. doi:10.1155/FPTA.2005.103 34. W. Takahashi, Nonlinear Functional Analysis – Fixed Point Theory and its Applications, Yokohama Publishers, Yokohama, 2000 (English). 35. W. Takahashi and Y. Ueda, On Reich's strong convergence for resolvents of accretive operators, J. Math. Anal. Appl. 104 (1984) 546–553. 36. H.K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc. 66 (2002) 240–256.

JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.4, 656-667, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

Statistical Convergence and Statistical Core of Sequences of Bounded Linear Operators

A. GÖKHAN
Fırat University, Faculty of Science, Department of Mathematics, 23119, Elazığ, Turkey. E-mail: [email protected].

Abstract: In this study, we introduce a notion of uniform, strong and weak statistical convergence of sequences of bounded linear operators. We also give the relations between these convergences and uniform operator convergence, strong operator convergence and weak operator convergence. Furthermore, we introduce the concept of the statistical Cauchy sequence for sequences of bounded linear operators and prove that it is equivalent to statistical convergence of sequences of bounded linear operators. In addition to these results we study the statistical core for bounded linear operators and give some inequalities.
Key Words: Operator sequence; Uniform operator convergence; Strong operator convergence; Weak operator convergence; Statistical convergence; Pointwise statistical convergence; Core theorems and matrix transformations.
MSC 2000: 40A05, 40C05

1. Introduction

The idea of statistical convergence was introduced by Fast [2] and Schoenberg [7] independently. Later on it was studied by Fridy [3], Šalát [6], Tripathy [8] and many others. Gökhan and Güngör [5] defined pointwise statistical convergence of sequences of real-valued functions. Connor [1] gave an extension of the notion of statistical convergence in which the asymptotic density is replaced by a finitely additive set function.
Let N be the set of natural numbers. For each E ⊆ N, let Kn(E) be the cardinality of the set E ∩ [0, n]. The asymptotic (or natural) density of E is given by δ(E) = lim_{n→∞} Kn(E)/n whenever the limit exists. Clearly finite sets have zero density, and δ(E^c) = δ(N \ E) = 1 − δ(E) whenever both sides exist, where E^c is the complement of the set E in N. We say that a real number sequence (xk) is statistically convergent to ℓ provided that for every ε > 0, δ({k ∈ N : |xk − ℓ| ≥ ε}) = 0, or equivalently δ({k ∈ N : |xk − ℓ| < ε}) = 1 for every ε > 0, in which case we write st-lim xk = ℓ.
Let X and Y be normed spaces and B(X, Y) be the normed space of all bounded linear operators from X into Y with the usual operator norm. As we know, a sequence of operators Tk ∈ B(X, Y) tends to a limit T, where T : X → Y is an operator, if given ε > 0 we can find an integer k0 such that
i) ‖Tk − T‖ < ε for all k > k0;
ii) ‖Tk x − T x‖ < ε for all k > k0 and for every x ∈ X;
iii) |f(Tk x) − f(T x)| < ε for all k > k0, for every x ∈ X and every f ∈ Y∗, where Y∗ denotes the set of all bounded linear functionals on Y.
Then T is called the uniform operator limit, the strong operator limit and the weak operator limit of (Tk), respectively. It is well known that (i) ⇒ (ii) ⇒ (iii).

2. Statistical Convergence of Sequences of Operators

Some operator sequences do not converge in the above convergence modes but might converge in a weaker sense. Therefore, in the present paper, we introduce a notion of uniform, strong and weak statistical convergence of sequences of bounded linear operators. Throughout the paper, (Tk) will denote a sequence of operators Tk ∈ B(X, Y).
Definition 2.1. The sequence (Tk) is said to be uniformly statistically operator convergent to T if for every ε > 0 there exists a set E ⊆ N with δ(E) = 1 and a k0(ε) ∈ E such that ‖Tk − T‖ < ε for all k > k0 with k ∈ E; i.e., for every ε > 0,
lim_{n→∞} (1/n) |{k ≤ n : ‖Tk − T‖ ≥ ε}| = 0.
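As a quick numerical illustration of the density δ(E) and of statistical convergence (this sketch is an addition, not part of the original paper): the index blocks [3^p, 3^p + p) used in the examples of Section 3 form a set of density zero, so a quantity that equals 1 on those indices and 0 elsewhere is statistically convergent to 0 although it does not converge in the ordinary sense. The function names below are chosen only for this illustration.

```python
def empirical_density(E_indicator, n):
    """|{k <= n : k in E}| / n  -- the quantity whose limit is delta(E)."""
    return sum(1 for k in range(1, n + 1) if E_indicator(k)) / n

def in_exceptional_set(k):
    """k lies in some block [3**p, 3**p + p), p = 1, 2, ...  (as in Example 3.1)."""
    p = 1
    while 3 ** p <= k:
        if 3 ** p <= k < 3 ** p + p:
            return True
        p += 1
    return False

# x_k = 1 on the exceptional blocks and 0 elsewhere: (x_k) does not converge,
# but st-lim x_k = 0 because the density of the exceptional set tends to 0.
for n in (10, 100, 1000, 10000, 100000):
    print(n, empirical_density(in_exceptional_set, n))
```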

In this case we write st-lim Tk = T, where T is an operator from X into Y.
Definition 2.2. The sequence (Tk) is said to be strongly statistically operator convergent to T if for every ε > 0 and every x ∈ X there exists a set Ex ⊆ N with δ(Ex) = 1 and a k0 ∈ Ex such that ‖Tk x − T x‖ < ε for all k > k0 with k ∈ Ex; i.e., for every ε > 0 and every x ∈ X,
lim_{n→∞} (1/n) |{k ≤ n : ‖Tk x − T x‖ ≥ ε}| = 0.
In this case we write st-lim Tk x = T x on Y for every x ∈ X, where T is an operator from X into Y.
Definition 2.3. The sequence (Tk) is said to be weakly statistically operator convergent to T if for every ε > 0, every x ∈ X and every f ∈ Y∗ there exists a set Ex,f ⊆ N with δ(Ex,f) = 1 and a k0 ∈ Ex,f such that |f(Tk x) − f(T x)| < ε for all k > k0 with k ∈ Ex,f; i.e., for every ε > 0, every x ∈ X and every f ∈ Y∗,
lim_{n→∞} (1/n) |{k ≤ n : |f(Tk x) − f(T x)| ≥ ε}| = 0.
In this case we write w.st-lim Tk x = T x on Y for every x ∈ X, where T is an operator from X into Y.
Since the proofs of the following theorems are obvious, we merely state them and omit the proofs.
Theorem 2.1: Let (Tk) and (Sk) be two sequences of operators.


i) If Tk x → T x and Sk x → S x (statistically, resp. weakly statistically) on Y for every x ∈ X, then a Tk x + b Sk x → a T x + b S x in the same sense on Y for every x ∈ X, where a, b ∈ R.
ii) If Tk x → T x (statistically, resp. weakly statistically) on Y for every x ∈ X, then ‖Tk x‖ → ‖T x‖ in the same sense for every x ∈ X.
Theorem 2.2: Let (Tk) and (Sk) be two sequences of operators.
i) If st-lim Tk = T and st-lim Sk = S, then st-lim (a Tk + b Sk) = a T + b S, where a, b ∈ R.
ii) If st-lim Tk = T, then st-lim ‖Tk‖ = ‖T‖.

3. Relations between Modes of Convergence

It is not difficult to show that
i) uniform operator convergence of (Tk) to T implies uniform statistical operator convergence;
ii) Tk x → T x on Y for every x ∈ X implies that Tk x → T x statistically on Y for every x ∈ X;
iii) Tk x → T x weakly on Y for every x ∈ X implies that Tk x → T x weakly statistically on Y for every x ∈ X.
But we note that the converses of (i), (ii) and (iii) are not true. The following examples are provided to clarify this.
Example 3.1. Let us consider the sequence (Tk) of operators Tk : ℓ∞ → c defined by
Tk x = (ξ1, ξ2/2, ξ3/3, . . .) if k ∈ [3^p, 3^p + p), p = 1, 2, . . ., and Tk x = (0, 0, 0, . . .) otherwise,
where x = (ξ1, ξ2, ξ3, . . .) ∈ ℓ∞. The operator Tk is linear and bounded for every k ∈ N. The sequence (Tk) is uniformly statistically operator convergent to T = 0, since
(1/n) |{k ≤ n : ‖Tk − 0‖ ≥ ε}| ≤ (p(p + 1)/2) / 3^p
for every p, n ∈ N such that 3^p ≤ n < 3^{p+1}, where
‖Tk − 0‖ = 1 if k ∈ [3^p, 3^p + p), p = 1, 2, . . ., and ‖Tk − 0‖ = 0 otherwise,
for each k ∈ N. However, (Tk) is not uniformly operator convergent to T = 0 since lim_{k→∞} ‖Tk‖ does not exist.

Example 3.2. A sequence (Tk) of operators Tk : ℓ2 → ℓ2 is defined by
Tk x = (0, . . . , 0, ξ1, ξ2, ξ3, . . .) (k zeros) if k ∈ [3^p, 3^p + p), p = 1, 2, . . ., and Tk x = (0, 0, 0, . . .) otherwise,
where x = (ξ1, ξ2, ξ3, . . .) ∈ ℓ2. This operator Tk is linear and bounded for every k ∈ N. The sequence (Tk) is strongly statistically operator convergent to T = 0, since
(1/n) |{k ≤ n : ‖Tk x − 0x‖ ≥ ε}| ≤ (p(p + 1)/2) / 3^p
for every p, n ∈ N such that 3^p ≤ n < 3^{p+1}.

Definition 4.1. The sequence (Tk) is a uniformly statistical operator Cauchy sequence provided that for every ε > 0 there exist a set E ⊆ N with δ(E) = 1, a k0(ε) ∈ E and an N (= N(ε)) such that ‖Tk − TN‖ < ε for all k > k0 with k ∈ E.
Definition 4.2. The sequence (Tk) is a strongly statistical operator Cauchy sequence provided that for every ε > 0 and every x ∈ X there exist a set Ex ⊆ N with δ(Ex) = 1, a k0 (= k0(ε, x)) ∈ Ex and an N (= N(ε, x)) such that ‖Tk x − TN x‖ < ε for all k > k0 with k ∈ Ex.
Definition 4.3. The sequence (Tk) is a weakly statistical operator Cauchy sequence provided that for every ε > 0, every x ∈ X and every f ∈ Y∗ there exist a set Ex,f ⊆ N with δ(Ex,f) = 1, a k0 ∈ Ex,f and an N (= N(ε)) such that |f(Tk x) − f(TN x)| < ε for all k > k0 with k ∈ Ex,f.
Theorem 4.1. Let Y be a Banach space. Then (Tk) is strongly statistically operator convergent on X if and only if (Tk) is a strongly statistical Cauchy sequence on X.
Proof: Assume that st-lim Tk x = T x for every x ∈ X. Then for every ε > 0 and every x ∈ X there exists a set Ex ⊆ N with δ(Ex) = 1 and a k0 ∈ Ex such that ‖Tk x − T x‖ < ε/2 for all k ≥ k0 with k ∈ Ex. If N is chosen so that ‖TN x − T x‖ < ε/2 for every x ∈ X, then we have ‖Tk x − TN x‖ ≤ ‖Tk x − T x‖ + ‖T x − TN x‖ < ε, so (Tk) is a strongly statistical Cauchy sequence.
Conversely, let ε > 0 be given and select p ∈ N such that ‖T_{n(p)} x − T x‖ < ε/2 for every x ∈ X and ε > 2^{−p}. Note that if ‖Tk x − T x‖ ≥ ε then ‖T_{n(p)} x − Tk x‖ > ε/2 > 2^{−1−p}, and hence k is not an element of Ap. It follows that {k : ‖Tk x − T x‖ ≥ ε} has asymptotic density zero for every x ∈ X. Hence (Tk) is strongly statistically operator convergent on X.
Theorem 4.2. Let Y be a Banach space. Then (Tk) is uniformly statistically operator convergent on X if and only if (Tk) is a uniformly statistical Cauchy sequence on X.
Proof: It can be shown in a similar way as Theorem 4.1; therefore we omit it.
The next theorems are statistical analogues of well-known theorems.
Theorem 4.3. Let every statistical Cauchy sequence in Y be statistically convergent. Then every statistical Cauchy sequence in B(X, Y) is statistically convergent.
Proof: We consider an arbitrary statistical Cauchy sequence (Tk) in B(X, Y) and show that (Tk) statistically converges to an operator T ∈ B(X, Y). Since (Tk) is statistical Cauchy, for every ε > 0 there exist a set E ⊆ N with δ(E) = 1, a k0 ∈ E and an N ∈ N such that ‖Tk − TN‖ < ε for all k > k0 with k ∈ E. For all x ∈ X and all k > k0 with k, k0 ∈ E, we thus obtain
‖Tk x − TN x‖ < ε ‖x‖.   (1)

Now for any fixed x and a given ε1 we may choose ε = εx so that εx ‖x‖ < ε1. From (1), we see that (Tk x) is a statistical Cauchy sequence in Y. Then (Tk x) is statistically convergent, say st-lim Tk x = y. Clearly, the statistical limit y ∈ Y depends on the choice of x ∈ X. This defines an operator T : X → Y, where y = T x. Since T(a x + b y) = st-lim Tk(a x + b y) = a st-lim Tk x + b st-lim Tk y = a T x + b T y, the operator T is linear. Using the continuity of the norm and Lemma 5 of [7], we obtain from (1), for every ε > 0, all k > k0 with k ∈ E, some N ∈ N and all x ∈ X,


‖T x‖ = ‖TN x − (TN x − T x)‖ ≤ ‖TN x‖ + ‖TN x − st-lim Tk x‖ ≤ c ‖x‖ + st-lim ‖TN x − Tk x‖ ≤ (c + ε) ‖x‖,
since TN is bounded. This shows that T is a bounded linear operator. Furthermore, we obtain that ‖Tk − T‖ = sup_{‖x‖=1} ‖Tk x − T x‖ → 0 statistically from the statistical convergence Tk x → T x for every x ∈ X.

Definition 4.4. The sequence (Tk) is said to be statistically bounded if there exist a set E ⊆ N with δ(E) = 1 and an Mx > 0 such that ‖Tk x‖ ≤ Mx for all k ∈ E and every x ∈ X. If Mx = M, then (Tk) is said to be uniformly statistically bounded.
Lemma 4.1: Let (Tk) be a sequence of operators Tk ∈ B(X, Y) and let B̄r(x0) ⊆ X be a closed ball such that (‖Tk x‖) is statistically uniformly bounded for all x ∈ B0 = B̄r(x0), say sup_{k∈E} ‖Tk x‖ = c, where c is a real number and δ(E) = 1. Then the sequence (‖Tk‖) is statistically bounded.
Proof: Let x ∈ X be arbitrary, not zero. Then ‖(x0 + r x/‖x‖) − x0‖ ≤ r, so that x0 + r x/‖x‖ ∈ B0. This yields, for all k ∈ E, where the set E ⊆ N and δ(E) = 1, ‖Tk x‖ ≤ (2c/r) ‖x‖. Hence, for all k ∈ E, ‖Tk‖ = sup_{‖x‖=1} ‖Tk x‖ ≤ 2c/r.
Theorem 4.4. Let (Tk) ⊆ B(X, Y), where X is a Banach space and Y a normed space. If there exists a set E ⊆ N of asymptotic density 1 such that ‖Tk x‖ ≤ cx for every x ∈ X and every k ∈ E, where cx is a real number, then there is a c such that ‖Tk‖ ≤ c for all k ∈ E.
Proof: We assume that the sequence of norms ‖Tk‖ is not bounded on the set E. Then, by Lemma 4.1, the sequence (‖Tk x‖) is also not bounded on the set E and on any closed ball in X. Hence there exist x1 ∈ B̄0 and n1 ∈ E such that ‖T_{n1} x1‖ > 1. Since T_{n1} is continuous and so is the norm, we have ‖T_{n1} x‖ > 1 on some ball B̄1 = B̄_{r1}(x1) ⊆ B̄0. Hence there exist x2 ∈ B̄1 and n2 > n1, n2 ∈ E, with ‖T_{n2} x2‖ > 2. Continuing in this way we obtain a sequence (xk) of points and a sequence (B̄k) of closed balls such that xk ∈ B̄k, B̄0 ⊇ B̄1 ⊇ · · · ⊇ B̄k ⊇ · · · and ‖T_{nk} x‖ > k on B̄k for nk ∈ E. This yields, for n1 < n2 < · · · and some x ∈ X, ‖T_{nk} x‖ > k, where nk ∈ E for k = 1, 2, . . .. Hence we see that the sequence (Tn x) is not statistically bounded, a contradiction.


Theorem 4.5. Let Tk ∈ B(X, Y), where X is a Banach space and Y a normed space. If (Tk) is strongly statistically operator convergent with limit T, then T ∈ B(X, Y).
Proof: From Theorem 4.4, the proof is trivial.

5. Statistical core of sequences of operators

Let (Tk) be a sequence of operators. For any sequence T = (‖Tk x‖), the statistical limit superior and the statistical limit inferior of T are
st-limsup Tx = sup ET,x if ET,x ≠ ∅, and −∞ if ET,x = ∅;
st-liminf Tx = inf FT,x if FT,x ≠ ∅, and +∞ if FT,x = ∅,
where ET,x = {ax ∈ R : δ({k : ‖Tk x‖ > ax}) ≠ 0} and FT,x = {bx ∈ R : δ({k : ‖Tk x‖ < bx}) ≠ 0} for every x ∈ X.
For example, consider the sequence (Tk) of operators Tk : ℓ1 → ℓ1 defined by
Tk x = (k, 0, . . . , 0, . . .) if k = n², n = 1, 2, . . .;
Tk x = (2ξ1, . . . , 2ξn, . . .) if k ≠ n² and k = 2n − 1, n = 1, 2, . . .;
Tk x = (3ξ1, . . . , 3ξn, . . .) if k ≠ n² and k = 2n, n = 1, 2, . . .,
where x = (ξ1, . . . , ξn, . . .) ∈ ℓ1. The operator Tk is linear and bounded for every fixed k ∈ N. It is easy to see that ET,x = (−∞, 3‖x‖) and FT,x = (2‖x‖, +∞), since
‖Tk x‖ = k if k = n²; ‖Tk x‖ = 2‖x‖ if k ≠ n² and k = 2n − 1; ‖Tk x‖ = 3‖x‖ if k ≠ n² and k = 2n, n = 1, 2, . . ..
Clearly, (‖Tk x‖) is unbounded but it is statistically bounded. For this sequence, st-limsup Tx = 3‖x‖ and st-liminf Tx = 2‖x‖. Furthermore, from [4] one can easily obtain that for any statistically bounded sequence T = (‖Tk x‖), st-limsup Tx = Lx if and only if for every ε > 0, δ({k : ‖Tk x‖ > Lx − ε}) ≠ 0 and δ({k : ‖Tk x‖ > Lx + ε}) = 0; and st-liminf Tx = ℓx if and only if for every ε > 0 and every x ∈ X, δ({k : ‖Tk x‖ < ℓx + ε}) ≠ 0 and δ({k : ‖Tk x‖ < ℓx − ε}) = 0. We can define the statistical core of bounded linear operators as follows:
Definition 5.1. For any statistically bounded operator sequence T = (‖Tk x‖), the statistical core of T is the closed interval [st-liminf Tx, st-limsup Tx].
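A small numerical check of the example above (this sketch is an addition and the function names are chosen only for illustration): for a fixed x it tabulates ‖Tk x‖, which equals k on the squares, 2‖x‖ on odd non-squares and 3‖x‖ on even non-squares, and estimates the densities that identify st-limsup Tx = 3‖x‖ and st-liminf Tx = 2‖x‖.

```python
import math

def norm_Tk_x(k, norm_x):
    """||T_k x|| for the example: k on squares, 2||x|| on odd non-squares, 3||x|| on even non-squares."""
    if int(math.isqrt(k)) ** 2 == k:
        return float(k)
    return 2.0 * norm_x if k % 2 == 1 else 3.0 * norm_x

def density_above(threshold, norm_x, n):
    """Proportion of k <= n with ||T_k x|| > threshold."""
    return sum(1 for k in range(1, n + 1) if norm_Tk_x(k, norm_x) > threshold) / n

norm_x, n = 1.0, 100000
for t in (1.9, 2.1, 2.9, 3.1):
    print(t, density_above(t, norm_x, n))
# density above 3.1 is ~0 (only the squares)   -> st-limsup ||T_k x|| = 3||x||
# density above 2.9 is ~1/2 (even non-squares) -> values above 2.9 occur with positive density
# density above 1.9 is ~1                      -> st-liminf ||T_k x|| = 2||x||
```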


If (Tk) is not statistically bounded, the statistical core of T is defined as either (−∞, st-limsup Tx], [st-liminf Tx, ∞) or (−∞, ∞).
Let X and Y be two nonempty subsets of the space of complex sequences. Let A = (ank) (n, k = 1, 2, . . .) be an infinite matrix. We write Ax = (An(x)) if An(x) = Σ_{k=1}^∞ ank xk converges for each n ∈ N. Then we say that the matrix A defines a matrix transformation from X into Y. A is called regular if x ∈ c implies Ax ∈ c and the limit is preserved, where c is the space of convergent sequences. In [4] Fridy and Orhan prove necessary and sufficient conditions under which the inequalities
limsup Ax ≤ st-limsup x and st-liminf x ≤ liminf Ax
hold for every x ∈ ℓ∞. Now, similarly, we will give these results for sequences of bounded linear operators. Let A = (ank) be an infinite summability matrix. For a given sequence of bounded linear operators T = (Tn), the sums
ATn x = Σ_{k=1}^∞ ank Tk x
are called the A-transform of T, provided the series converges for each n ∈ N and for all x ∈ X, and are denoted by AT = (ATn).
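The following is a small illustrative sketch (an addition; the Cesàro means are chosen here as a familiar example of a regular summability matrix, and the sequence x is hypothetical): it forms the scalar A-transform A_n(x) = Σ_k a_{nk} x_k, which is the same averaging that the operator-valued A-transform (AT_n x) performs.

```python
def cesaro_row(n):
    """Row n of the Cesaro matrix C1: a_{nk} = 1/n for k = 1..n, 0 otherwise (a regular matrix)."""
    return {k: 1.0 / n for k in range(1, n + 1)}

def A_transform(x, n):
    """A_n(x) = sum_k a_{nk} x_k for the Cesaro matrix."""
    return sum(a * x(k) for k, a in cesaro_row(n).items())

def x(k):
    """Bounded example sequence: 5 on the sparse blocks [3**p, 3**p + p), 2 elsewhere."""
    p = 1
    while 3 ** p <= k:
        if 3 ** p <= k < 3 ** p + p:
            return 5.0
        p += 1
    return 2.0

for n in (10, 100, 1000, 10000):
    print(n, A_transform(x, n))
# The Cesaro means approach 2 = st-limsup x (the sparse blocks have density zero),
# in line with the inequality limsup Ax <= st-limsup x for matrices satisfying (i) and (ii).
```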

Lemma 5.1: Let (Tn) be a sequence of uniformly bounded operators (i.e., there is a positive number M such that ‖Tn x‖ < M for all x ∈ X and all n ∈ N), Tn ∈ B(X, Y). Suppose the matrix A satisfies sup_n Σ_{k=1}^∞ |ank| < ∞. Then
limsup (sup_{x∈X} ‖ATn x‖) ≤ st-limsup (sup_{x∈X} ‖Tn x‖)   (2)
if and only if
i) A is a regular matrix and lim_n Σ_{k∈E} |ank| = 0 for every E ⊆ N such that δ(E) = 0;
ii) lim_n Σ_{k=1}^∞ |ank| = 1.

Proof: (Necessity) Assume that for any uniformly bounded sequence (kTn xk) on Y , the matrix A = (ank ) satis…es condition (2). Let an = sup kTn xk :Then x2X 1 P

y = (an ) is a positive real number sequence and y 2 `1 :Since supn 1, we obtain that sup (sup kATn xk)

n2N x2X

sup (sup n2N x2X

X k

sup ak ( sup k2N

10

jank j kTk xk)

X

n2N

k

jank j) < 1;

k=1

jank j
x2X

sup kTk xk

x

x2X

X

jank j kTk xk < K

x

x

= st

k

jank j < 1:

lim sup(sup kTn xk):Then we have E = x2X

+ "g for a given " >0 and (E) = 0: Hence it is clear that

+ " for k 2 = E. Now we can write

ank Tk x

=

k

X k

(jank j kTk xk + ank Tk x)2

X

ank Tk x +

k

K

X

k2E

K

X

k2E

X k

jank j +

k

( jank j

X

jank j + (

x

1

X k

(jank j kTk xk

x2X

+ ")

X

k2E =

x

+ ")

k2E

ank Tk x)2

ank ) kTk xk

jank j (sup kTk xk) + K

k2E =

Then we conclude that X X sup ank Tk x K jank j + ( x2X

X

jank j + K

X

k2E =

X k

jank j + K

X k

( jank j

( jank j

X k

( jank j

ank )

ank ):

ank ):

Using the (i) and (ii), we have limsup (sup kATn xk) x + ": x2X

Since " is arbitrary, this completes the proof: It is clear that one can prove a similar way in the following lemma. Lemma 5.2. Let (Tk ) be a sequence of uniformly bounded operators Tk 2 B(X; Y ): 1 P Suppose the matrix A satis…es supn jank j < 1 then k=1

st

liminf ( inf kTk xk) x2X

liminf ( inf kATk xk) x2X

if and only if P i) A is regular matrix and lim jank j = 0 for E n

ii) lim n

1 P

k=1

k2E

jank j =1.

11

N such that (E) = 0;

1


From Lemmas 5.1 and 5.2, we obtain the following theorem.
Theorem 5.1: Let (Tk) be a sequence of uniformly bounded operators Tk ∈ B(X, Y). Suppose the matrix A satisfies sup_n Σ_{k=1}^∞ |ank| < ∞. Then
limsup (sup_{x∈X} ‖ATn x‖) ≤ st-limsup (sup_{x∈X} ‖Tn x‖) and st-liminf (inf_{x∈X} ‖Tk x‖) ≤ liminf (inf_{x∈X} ‖ATk x‖)
if and only if conditions (i) and (ii) in Lemma 5.2 are satisfied.

References
[1] J. Connor: Two valued measures and summability. Analysis 10 (1990), 373-385.
[2] H. Fast: Sur la convergence statistique. Colloq. Math. 2 (1951), 241-244.
[3] J. A. Fridy: On statistical convergence. Analysis 5 (1985), 301-313.
[4] J. A. Fridy and C. Orhan: Statistical limit superior and limit inferior. Proc. Amer. Math. Soc. 125 (1997), 3625-3631.
[5] A. (Türkmenoğlu) Gökhan and M. Güngör: On pointwise statistical convergence. Indian J. Pure Appl. Math. 33(9) (2002), 1379-1384.
[6] T. Šalát: On statistically convergent sequences of real numbers. Math. Slovaca 30 (1980), 139-150.
[7] I. J. Schoenberg: The integrability of certain functions and related summability methods. Amer. Math. Monthly 66 (1959), 361-375.
[8] B. C. Tripathy: On statistically convergent sequences. Bull. Cal. Math. Soc. 90 (1998), 259-262.


JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL.8, NO.4, 668-681, 2010, COPYRIGHT 2010 EUDOXUS PRESS, LLC

THE FLOW OF A LIQUID WITH CAVITATION
Ivan Straškraba, Emil Vitásek∗
June 25, 2009

Institute of Mathematics of the Academy of Sciences of the Czech Republic, Prague, Czech Republic. E-mail addresses: [email protected], [email protected]. Abstract: The aim of this paper is to find, in a closed form, special solutions of the equations describing a one-dimensional non-stationary flow of a liquid containing dissolved gas. The special feature of these solutions is that, although they do not satisfy all initial and boundary conditions, they describe physical characteristics qualitatively analogous to those described by the original equations. Thus these special solutions may prove a useful means for judging the reliability of the original mathematical model of the problem.

Mathematics Subject Classification (2000): 34A05 Keywords: Compressible fluid, Navier-Stokes equations, asymptotic behavior

1  Introduction

In the paper we analyze the mathematical model of the flow of a column of a real liquid. It is known that cavitation plays an important role not only in the theory of fluids but may be even more significant in engineering and technological practice. Let us mention the cavities and bubbles which appear at the suction compartments of pumps, in turbines, or in hydraulic machinery. Monitoring of the possible separation of gas from the liquid is important, since if it appears there is a danger of failure when enormous forces are loaded onto devices which serve, for example, in the building industry. Therefore, a dynamic model of a two-phase flow has been derived under physically realistic assumptions. This has been done in the former Institute for Construction of Machines in Běchovice, Czech Republic (see [1]). To preserve the author's and the institute's rights, we do not present here the derivation of the model and refer the interested readers to Professor Jan Sklíba at the Technical University of Liberec, Czech Republic ([email protected]). The purpose of this paper is to partially analyze the model, to be defined below, mathematically. Our analysis is based on a special, a priori assumed form of solutions which describe certain features of the flow in question. We balance this limitation by providing the solutions in a closed form. In spite of this restriction we believe that our approach is useful in engineering practice, based on the authors' experience with applying similar mathematical analysis to other problems.
∗ The research was supported by the Academy of Sciences of the Czech Republic, Institutional Research Plan No. AV0Z10190503, and partially supported by the grant of the Grant Agency of the Czech Republic No. 201/08/0315.

2  The formulation of the problem

The equations of the flow of a real liquid in a duct of length l can be written in the form (see for instance [1]):
w_t + ρ0^{-1} p_x + f(w) = 0,   (2.1)
p_t + ρ0 c²(p, γ) w_x = 0,   (2.2)
γ_t + w γ_x = g(γ, p),  x ∈ (0, l), t ∈ (0, T), (T > 0),   (2.3)
w(x, 0) = w0(x),   (2.4)
p(x, 0) = p0(x),   (2.5)
γ(x, 0) = γ0(x),  x ∈ [0, l],   (2.6)
C(p(0, t), γ(0, t)) p_t(0, t) + QV(p(0, t), H(t)) − S0 w(0, t) + ϕ Ḣ(t) = 0,   (2.7)
w(l, t) = h(t),   (2.8)
Ḧ(t) + Φ(t, H(t), Ḣ(t), p(0, t), p_t(0, t)) = 0,  t ∈ [0, T],   (2.9)
H(0) = H0,  Ḣ(0) = H1.   (2.10)
The quantities occurring in (2.1)–(2.10) have the following meaning:
w = w(x, t) — the velocity of the liquid at the point x and time t,
p(x, t) — the pressure,
γ = γ(x, t) — the mass of the freed air in the unit volume of the liquid,
ρ0 — the density of the liquid,
c = c(p, γ) — the sound velocity in the liquid and in the liquid containing the air, respectively (a given function of p, γ),
f = f(w) — the coefficient of the resistance (the friction of the liquid on the wall of the duct), an odd function,
g = g(γ, p) = Ku((γ̄ − γ)/KH − p) if (γ̄ − γ)/KH ≥ p, and Kr((γ̄ − γ)/KH − p) if (γ̄ − γ)/KH < p,
Ku, Kr — the constants characterizing the proportionality of the velocity of loosening and of dissolution, respectively, on the pressure gradient,
KH — the coefficient of absorption,

γ̄ — the total mass of the air in the unit volume,
w0, p0, γ0 — the initial distributions of the velocity, the pressure, and the mass of the loosened air in the unit volume, respectively,
C = C(p, γ) — the hydraulic capacity (a given function of p, γ),
H — the throw of the valve,
QV = QV(p, H) — the flow through the valve (a given function of p, H),
S0 — the cross-section of the duct,
ϕ — the acting facing of the valve,
h — the flow rate caused by the hydrogenerator at the end of the duct,
H0, H1 — the initial position and the initial velocity of the valve, respectively.

In what follows, we assume that all given functions are sufficiently smooth, and the solution will be sought smooth as well, i.e., continuously differentiable. The special solutions of our problem will be the functions w, p, γ satisfying equations (2.1), (2.2) and (2.3). The special feature of these solutions is that despite the fact that they do not satisfy all initial and boundary conditions, they express physical characteristics qualitatively analogous to those described by (2.1)–(2.10).

3  Stationary solution

Three functions w = w(x), p = p(x), γ = γ(x), depending only on the length coordinate x of the tube and satisfying equations (2.1) to (2.3), are understood as a stationary solution. These equations, written for functions independent of the variable t, form a simple system of three ordinary differential equations ([2])
ρ0^{-1} p′ + f(w) = 0,   (3.1)
ρ0 c²(p, γ) w′ = 0,   (3.2)
w γ′ = g(γ, p),  x ∈ (0, l),   (3.3)
where p′ = dp/dx etc. Analogously as in [1], the function c(p, γ) will be assumed in the form
c(p, γ) = c1 p² / (c2 p² + γ + c3),   (3.4)
where ci > 0, i = 1, 2, 3, are constants. The physical principles suggest the condition c(p, γ) > 0. Since the trivial solution with p = 0 is not interesting, equation (3.2) gives us w′ = 0 and from here we have
w = w0 = constant.   (3.5)
Consequently, (3.1) implies
p(x) = p0 − ρ0 f(w0) x,   (3.6)
and for the function γ we obtain the equation
γ′ = (1/w0) g(γ, p0 − ρ0 f(w0) x).   (3.7)

The constants w0, p0 may be chosen arbitrarily. Also the integration of (3.7) gives an additional free integration constant. Heuristic considerations indicate that the stationary solution should be a limit of a non-stationary solution for t → ∞. In order to respect the boundary conditions at least partially, we will require the stationary solution to satisfy the generalized limit of the boundary conditions. More exactly, we impose the condition (compare with (2.8))
w0 = lim_{t→∞} (1/t) ∫_0^t h(s) ds.   (3.8)

Naturally, we must suppose that the function h is such that this limit exists. The limit is requested in that sense since the delivery of the hydrogenerator may regularly oscillate about a certain value so that the limit limt→∞ h(t) need not exist. The number p0 in (3.6) is then determined from (2.7), i.e., from QV (p0 , H0 ) − S0 w0 = 0

(3.9)

supposing that equation (3.9) is solvable with respect to p0. It remains to determine the function γ from (3.7). It is obvious that the sign of w0 indicates the direction of the flow. If the hydrogenerator is supposed to be at the point x = l, it is logical that w0 < 0 and then it is necessary to prescribe γ(l) = γ0, the mass of the loosened air in the unit volume of the incoming liquid. Since the function g in the right-hand term of (3.7) is given by different formulas for loosening and dissolution of the air, we must distinguish between the following two cases:
g(γ, p) = Ku((γ̄ − γ)/KH − p) if (γ̄ − γ)/KH ≥ p;  g(γ, p) = Kr((γ̄ − γ)/KH − p) if (γ̄ − γ)/KH < p.   (3.10)
Let us define the functions
ϕ(x) = γ̄ − KH [p0 − ρ0 f(w0) x],   (3.11)
and
K(ξ) = Ku/(w0 KH) ≡ −K1 if ξ ≥ 0;  K(ξ) = Kr/(w0 KH) ≡ −K2 if ξ < 0,  Ki > 0, i = 1, 2.   (3.12)
Then it is possible to rewrite equation (3.7) in the form
γ′ = (ϕ(x) − γ) · K(ϕ(x) − γ),   (3.13)


or, if we put
y(x) = ϕ(x) − γ(x),   (3.14)
y′ + y K(y) = ϕ′(x).   (3.15)
According to (3.11) it is
ϕ′(x) = KH ρ0 f(w0) ≡ ϕ0 = const.   (3.16)

If (γ̄ − γ0)/KH ≥ p(l) = p0 − ρ0 f(w0) l, then y(l) ≥ 0, and the continuity of the solution y of (3.15) implies K(y) = Ku/(w0 KH) = −K1 = const. for x < l sufficiently close to l. Consequently, equation (3.15) is linear on this interval and we have
y(x) = exp(K1(x − l)) y(l) + ∫_l^x exp(K1(x − ξ)) ϕ0 dξ = exp(K1(x − l)) y(l) + ((exp(K1(x − l)) − 1)/K1) ϕ0.   (3.17)

Since w0 < 0 and the function f (w) describing the friction is necessarily odd and positive for positive w’s, from (3.16) and (3.17), it follows that y(x) > exp(K1 (x − l))y(l) ≥ 0 in the whole interval [0, l]. But this fact means that the function y(x) is defined for all x ∈ [0, l] by formula (3.17). Using (3.17), (3.14) and (3.11) we then obtain for the function γ(x) the formula γ(x) = ϕ(x) − y(x) = γ¯ − KH [p0 − ρ0 f (w0 )x]   − exp −K1 (l − x) γ¯ − KH [p0 − ρ0 f (w0 )l] + γ0 −

1 − exp(−K1 (l − x)) KH ρ0 f (w0 ) K1

(3.18)

for x ∈ [0, l], and K1 = −Ku /w0 KH . On the other hand, let (¯ γ − γ0 )/KH < p(l) = p0 − ρ0 f (w0 )l, i.e., y(l) < 0. Then it follows – again from the continuity – that K(y) = −K2 in (3.15) for x ∈ (x? , l] where x? is the upper bound of numbers in the interval (−∞, l) satisfying y(x? ) = 0. On the interval (x? , l], the solution y(x) is given by formula exp(K2 (x − l)) − 1 y(x) = exp(K2 (x − l))y(l) + ϕ0 . (3.19) K2 If x? ≤ 0 then the solution of (3.15) is given by (3.19) in the whole interval [0, l]. From the condition y(x? ) = 0 and from (3.19) we obtain the unique point ! 1 ϕ0 ? x =l+ ln . (3.20) K2 ϕ0 + K2 y(l) If x? > 0 which means the assumption γ0 < γ¯ − KH [p0 − ρ0 f (w0 )l] +

 2 KH ρ0 f (w0 )w0  Kr l  exp − −1 , Kr w0 KH

(3.21)


as we obtain after elementary calculation and substitution from (3.16), (3.14) and (3.12), then y(x) is given by (3.19) only for x ∈ (x? , l]. In the interval [0, x? ] we then easily obtain that Z x   ? ? exp K1 (x − ξ) ϕ0 dξ y(x) = exp K1 (x − x ) y(x ) + x?

=

2  KH ρ0 w0 f (w0 )  Ku 1 − exp (x? − x) , Ku w0 KH

(3.22)

(y(x? ) = 0 !), and for x? it is necessary to substitute from (3.20). If we substitute from (3.14), (3.11) in the formulae (3.19) and (3.12), respectively we obtain the formula for γ(x) analogously as in (3.18). The stationary problem is completely solved.
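Since the closed-form expressions above depend on the case distinction in (3.10), a direct numerical check can be useful. The sketch below is an addition, not part of the paper: all constants are arbitrary illustrative values, and f(w) = k|w|w is borrowed from (4.8). It evaluates the stationary pressure (3.6) and integrates equation (3.7) for γ backwards from x = l with γ(l) = γ0 by an explicit Euler scheme.

```python
import numpy as np

# hypothetical illustrative constants (not taken from the paper)
rho0, l, k = 1.0, 1.0, 0.5
Ku, Kr, KH, gamma_bar = 0.4, 0.2, 0.1, 1.0
w0, p0, gamma_l = -1.0, 2.0, 0.3          # w0 < 0: the liquid enters at x = l

f = lambda w: k * abs(w) * w              # friction law (4.8), an odd function

def g(gamma, p):
    """Right-hand side (3.10): loosening (Ku) or dissolution (Kr) of the air."""
    s = (gamma_bar - gamma) / KH - p
    return (Ku if s >= 0 else Kr) * s

p = lambda x: p0 - rho0 * f(w0) * x       # stationary pressure profile (3.6)

# integrate gamma' = g(gamma, p(x)) / w0  (equation (3.7)) from x = l down to x = 0
N = 2000
xs = np.linspace(l, 0.0, N + 1)
dx = xs[1] - xs[0]                        # negative step: we march from l towards 0
gamma = gamma_l
for x in xs[:-1]:
    gamma = gamma + dx * g(gamma, p(x)) / w0

print("p(0) =", p(0.0), " p(l) =", p(l), " gamma(0) =", gamma)
```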

4  Oscillatory solution

Oscillatory solutions are called such solutions of equations (2.1) to (2.3) which do not depend on the space variable x. Thus w = w(t), p = p(t), γ = γ(t). In this special case, the system (2.1) to (2.3) may be written as follows:
ẇ + f(w) = 0,   (4.1)
ṗ = 0,   (4.2)
γ̇ = g(γ, p),  t > 0.   (4.3)

We see that the equations are practically separated: from (4.2) we compute
p(t) = p0 = const.,   (4.4)
and we are left with two separate equations, (4.1) and
γ̇ = g(γ, p0).   (4.5)
The solvability of equation (4.1) for t ∈ (0, ∞) is guaranteed by the assumption
m = inf_{w∈R} f′(w) > −∞.   (4.6)

(f 0 (w) is continuous). The proof of this assertion is an elementary consequence of the theory of ordinary differential equations. Assumption (4.6) is practically always fulfilled. If, for example, f (w) = k|w|w (k > 0 constant), then f 0 (w) = 2k|w| ≥ 0 = m. As far as equation (4.5) is concerned the global existence of the solution is guaranteed by the inequality |g(γ1 , p0 ) − g(γ2 , p0 )| ≤ max{Ku /KH , Kr /KH }|γ1 − γ2 |, γ1 , γ2 ∈ R which follows from (3.10). This condition means nothing else than that the function g is globally Lipschitzian with respect to the variable γ. Naturally, it is possible, for w and γ, to prescribe the initial conditions w(0) = w0 , γ(0) = γ0 . (4.7)


This is a possibility how to satisfy, at least partially, the initial conditions (2.4) to (2.6), naturally only with constant functions w0(x) = w0, p0(x) = p0, γ0(x) = γ0. On the other hand, the boundary conditions (2.7), (2.8) will never be satisfied by this type of solution. Equations (4.1), (4.5), (4.7) can be solved numerically by well-known approximate methods for solving ordinary differential equations. The solution in a closed form can be obtained only for a particular choice of the function f. Hence, let us investigate the case
f(w) = k|w|w.   (4.8)
If w0 > 0 then, due to (4.1), (4.8), we have ∫_{w0}^{w} dw/(k w²) = −t and from here
w(t) = w0 / (1 + k w0 t).   (4.9)
If w0 < 0 we obtain analogously
w(t) = w0 / (1 − k w0 t).   (4.10)
The formulae (4.9), (4.10) may be joined, for both cases, into one:
w(t) = w0 / (1 + k |w0| t).   (4.11)
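As a quick check of (4.9)–(4.11), the following minimal sketch (an addition; the constants k, w0, T are arbitrary illustrative values) integrates ẇ = −k|w|w with a forward Euler scheme and compares the result with the closed form (4.11).

```python
k, w0, T, N = 0.3, -2.0, 10.0, 100000
dt = T / N

w = w0
for _ in range(N):                        # forward Euler for (4.1) with f(w) = k|w|w
    w -= dt * k * abs(w) * w

w_exact = w0 / (1.0 + k * abs(w0) * T)    # closed form (4.11)
print(w, w_exact)                          # the two values agree closely
```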

Equation (4.5) may now be rewritten in the form
ẏ + y K(y) = 0,   (4.12)
where
y(t) = γ̄ − p0 KH − γ(t),  K(ξ) = Ku/KH if ξ ≥ 0;  K(ξ) = Kr/KH if ξ < 0.   (4.13)

If y0 ≡ γ̄ − p0 KH − γ0 ≥ 0, the solution of (4.12) is the function y(t) = exp(−(Ku/KH) t) y0. If y0 < 0 then y(t) = exp(−(Kr/KH) t) y0. Using (4.13) we obtain from here
γ(t) = γ̄ − p0 KH − y0 exp(−(Ku/KH) t), if γ0 ≤ γ̄ − p0 KH,   (4.14)
and
γ(t) = γ̄ − p0 KH − y0 exp(−(Kr/KH) t), if γ0 > γ̄ − p0 KH.   (4.15)

The oscillatory solution is given by formulae (4.4), (4.11), (4.14) and (4.15). Here the oscillatory solution does not fully live up to its name: the functions (4.14), (4.15) stabilize for t → ∞ at the steady state value γ̄ − p0 KH without any overswing.

5  Combined solutions

By a combined solution we mean in this context a solution of equations (2.1) to (2.3) which is neither stationary nor oscillatory, and for which at least one of the functions w, p, γ depends only on x or only on t. Consider first such solutions for which w = w(t), p = p(x). For this case the equations (2.1), (2.2), (2.3) take the form
ẇ + ρ0^{-1} p′ + f(w) = 0,   (5.1)
γ_t + w γ_x = g(γ, p).   (5.2)

Equation (2.2) is obviously satisfied identically. From (5.1) it follows that
p′(x) = const.   (5.3)
Instead of (2.7), (2.8), choose for p the boundary conditions
p(0) = p0,  p(l) = p1,   (5.4)
which imitate the gradient of the pressure caused by the hydrogenerator. Then
p(x) = p0 + ((p1 − p0)/l) x,   (5.5)
as follows from (5.3). Consequently, (5.1) gives
ẇ + f(w) = (p0 − p1)/(ρ0 l)   (5.6)
for the function w. If we suppose (4.6) and supply the initial condition
w(0) = w0,   (5.7)

we know that there exists a global solution w(t) of problem (5.6), (5.7). Once we know such a solution, we can find the function γ from the equation
γ_t + w(t) γ_x = g(γ, p0 + ((p1 − p0)/l) x)   (5.8)

by the method of characteristics. Naturally it is necessary to add the corresponding initial and boundary conditions. The initial condition may be given in the quite general form γ(x, 0) = γ0 (x).

(5.9)

If it is w(t) > 0, we must prescribe γ(0, t) = γ 0 (t),

(5.10)


i.e., the concentration of the air in the liquid flowing into the tube through the end x = 0. If w(t) < 0 it is necessary to prescribe
γ(l, t) = γ1(t),   (5.11)
for the liquid flowing into the tube through the end x = l. Boundary conditions (5.10), (5.11) may change one for the other in the course of time, but they can never be prescribed both together since w is independent of x. Hence, the problem given by (5.8), (5.9), and (5.10) or (5.11), respectively, must be solved over time intervals on which the function w(t) does not change sign. In order to avoid the complicated testing let us choose a particular situation which is also applicable in practice. Suppose that
p1 > p0,  w0 < 0,  (p0 − p1)/(ρ0 l) − f(w0) < 0,  f(−ξ) = −f(ξ),  f′(ξ) ≥ 0, ξ ∈ R.   (5.12)

Physically, these assumptions mean (i) the pressure on the left-hand side of the tube is greater than on the righthand side; (ii) the liquid flows from the right-hand side to the left-hand side at the beginning; (iii) the difference of the pressures is not yet weighted by the force of the resistance of the duct at the beginning; (iv) the force of the resistance always acts against the direction of the motion of the fluid and it does not abate with increasing velocity. Assumptions (5.12) and equations (5.6), (5.7) imply w˙ < 0. Thus w(t) is a decreasing function for increasing t either for any t > 0 or there exists a t? > 0 such that f w(t? ) = (p0 − p1 )/ρ0 l and then w(t) = w(t? ) for t ≥ t? . The reason is that the function w(t) = w(t? ) is a solution of equation (5.6) on the interval (t? , ∞) with the initial condition w(t? ) and that the equation (5.6) has the unique solution for the given initial condition. Hence w(t) < 0 for all t ≥ 0 and we have to solve the problem (5.8), (5.9), (5.11) for determining γ. Let us apply the method of characteristics. Let x ∈ [0, l], t > 0 be arbitrary. Put Z τ X (τ ; x, t) = x + w(s)ds. (5.13) t

Then Xτ (τ ; x, t) = w(τ ). If we put, moreover, φ(τ ) = γ(X (τ ; x, t), τ ).

(5.14)


we have
φ̇(τ) = γ_x X_τ + γ_t = γ_t + w γ_x = g(φ(τ), p0 + ((p1 − p0)/l) X(τ; x, t)).   (5.15)

Thus if x and t are given we obtained a differential equation for the function φ(τ ) given by (5.14). If we find φ(τ ) for τ ∈ [0, t], then γ(x, t) = φ(t)

(5.16)

as it follows from (5.14), (5.13). The initial condition for the function φ(τ ) is given either by the formula  φ(0) = γ X (0; x, t), 0 = γ0 x −

Z

t

 w(s)ds

(5.17)

0

if the characteristic ξ = X (τ ; x, t) falls on the axis τ = 0 in the interval 0 ≤ ξ ≤ l or by the formula  φ(τ ? ) = γ l, τ ? = γ 1 (τ ? ) (5.18) where X (τ ? ; x, t) = l if this characteristic falls on the axis ξ = l at some point τ = τ ? > 0. The value of τ ? is to be computed from an implicit equation Z

t

x−

w(s)ds = X (τ ? ; x, t).

(5.19)

τ?

Put y(τ ) = ϕ(τ ; x, t) − φ(τ ),

(5.20)

Z τ   p 1 − p0 x+ w(s)ds ϕ(τ ; x, t) = γ¯ − KH p0 + l t

(5.21)

where

and

K(ξ) =

 Ku    K , if ξ ≥ 0 H

 K   r , if ξ < 0. KH

(5.22)

Then (5.15) can be written in the form y˙ + yK(y) = ϕτ and it is, according to (5.21) and (5.12), ϕτ = KH ((p0 − p1 )/l)w(τ ) > 0. Solve the equation y˙ + yK(y) = KH

p 0 − p1 w(τ ) l

(5.23)


with the initial condition y(0) = ϕ(0; x, t) − φ(0) Z t Z 0    p1 − p 0 = γ¯ − KH p0 + w(s)ds ≡ y0 x+ w(s)ds − γ0 x − l 0 t (5.24) which corresponds to the situation when the characteristic ξ = X (τ ; x, t) falls on the axis ξ. If it is now y0 ≥ 0, then the solution of problem (5.23), (5.24) is the function Z  K  K   p0 − p1 τ u u exp − y(τ ) = exp − τ y0 + KH (τ − s) w(s)ds (5.25) KH l KH 0 for τ ∈ [0, t]. If y0 < 0, then the solution of problem (5.23), (5.24) is the function Z  K   K  p0 − p1 τ r r exp − τ y0 + KH (τ − s) w(s)ds (5.26) y(τ ) = exp − KH l KH 0 as far as y(τ ) < 0. If τ1 ≡ inf{τ ; 0 < τ ≤ t, y(τ ) = 0} < t, then it is necessary to prolong the solution (5.26) onto the interval [τ1 , t] by the formula Z  K  p 0 − p1 τ u exp − y(τ ) = KH (τ − s) w(s)ds. (5.27) l KH τ1 When the characteristic ξ = X (τ ; x, t) falls on the axis ξ = l we proceed completely analogously; as an initial condition we use y(τ ? ) = ϕ(τ ? ; x, t) − φ(τ ? ) Z τ?   p 1 − p0 x+ = γ¯ − KH p0 + w(s)ds − γ 1 (τ ? ) ≡ y1 l t

(5.28)

and integrate the equation over the interval (τ ? , t]. At the same time the value τ ? = τ ? (x, t) will be computed from (5.19). Again, it is necessary to distinguish whether y1 ≥ 0 or y1 < 0. In the first case we obtain   Ku (τ − τ ? ) y1 y(τ ) = exp − KH   Z p0 − p1 τ Ku (τ − s) w(s)ds, τ ∈ [τ ? , t]. (5.29) + KH exp − l KH τ? In the second case it is necessary, moreover, to distinguish whether y(τ1 ) = 0 for some (smallest one) τ1 ∈ (τ ? , t) or y(τ ) < 0 for all τ ∈ (τ ? , t). In the latter case we have Z    K  Kr p0 − p 1 τ r y(τ ) = exp − (τ − τ ? ) y1 + KH exp − (τ − s) w(s)ds KH l KH τ? (5.30)


for all τ ∈ [τ ? , t]. In the opposite case the formula (5.30) is valid only for τ ∈ [τ ? , τ1 ] and for τ ∈ (τ1 , t), we must, moreover, y(τ ) prolong using the formula Z  K  p 0 − p1 τ u exp − (τ − s) w(s)ds, τ ∈ (τ1 , t). (5.31) y(τ ) = KH l KH τ1 Finally, the value of γ(x, t) is found from formula (5.16), where φ(t) = ϕ(t; x, t)− y(t), ϕ is given by (5.21), y(t) by formulae (5.25) to (5.28), the values y0 and y1 by (5.24) and (5.28), respectively, and τ ? by equation (5.19). Hence, if Z 0 Z t     p 1 − p0 (5.32) x+ w(s)ds , γ0 x − w(s)ds < γ¯ − KH p0 + l t 0 then, according to (5.26), (5.20) and (5.25), we have γ(x, t) = φ(t) = ϕ(t; x, t) − y(t)  p 1 − p0  = γ¯ − KH p0 + x Z 0  K    p1 − p 0  u t γ¯ − KH p0 + x+ − exp − w(s) ds KH l t Z Z t  K    p0 − p 1 t u −γ0 x − exp − w(s) ds − KH (t − s) w(s)ds. l KH 0 0 (5.33) In other cases we proceed analogously. We will not introduce the resulting formulae since the procedure consists in fact only in the routine substitution even though formally complicated. It is clear that we may not be able to compute w(t) in a closed form for general function f . Consequently also the formulae for γ(x, t) will not be explicit in general. On the other hand let us introduce the concrete formulae for the physically interesting special case, namely, that f (w) = k|w|w. Then equation (5.6) has the form w˙ =

(p0 − p1)/(ρ0 l) + k w²,   (5.34)

since w < 0, as follows from our preceding investigations, and thus k|w|w = −k w². Moreover, p1 > p0. Put
z(t) = (k ρ0 l/(p1 − p0))^{1/2} w(t),  t > 0,  z(0) = z0 = (k ρ0 l/(p1 − p0))^{1/2} w0.   (5.35)
Then (5.34) can be written in the form
(d/dt) log| (z − 1)/(z + 1) | = 2 (k(p1 − p0)/(ρ0 l))^{1/2}.   (5.36)


Integrating (5.36) over the interval (0, t) we get
log( ((z − 1)/(z + 1)) · ((z0 + 1)/(z0 − 1)) ) = αt,   (5.37)
where
α = 2 (k(p1 − p0)/(ρ0 l))^{1/2}.   (5.38)
Applying the function exp to (5.37) we obtain
(z − 1)/(z + 1) = ((z0 − 1)/(z0 + 1)) e^{αt}.   (5.39)

After straightforward but rather lengthy calculation we arive at the formula w(t) =

 p − p 1/2  p − p 1/2 αw + 2 + (αw − 2)eαt 1 0 1 0 0 0 , z(t) = kρ0 l kρ0 l αw0 + 2 + (αw0 − 2)eαt

(5.40)

where α is given by (5.38) and k is the constant of friction in (4.8). Notice that
lim_{t→∞} w(t) = − ((p1 − p0)/(k ρ0 l))^{1/2},
which corresponds to the equilibrium state. Since the limit speed of the fluid is negative, the fluid flows from the end x = l in the direction of x = 0. This is in accordance with the physical idea that the fluid flows from the place of higher pressure to the region where the pressure is lower.
All three special solutions which we have introduced have a common deficiency: they are not influenced by the dependence of the sound speed c = c(p, γ) on the pressure and the concentration of the air, since the term c²(p, γ) w_x in equation (2.2) vanishes in all cases. For that reason it seems reasonable to drop the term f(w) by putting it equal to zero and to investigate solutions of the type w = w1 x + w0, p = p(t), γ = γ(t), where w0, w1 are constants. Then equation (2.1) (with f ≡ 0) is satisfied identically and from (2.2), (2.3) we obtain the system of two ordinary differential equations
ṗ(t) = −ρ0 c²(p(t), γ(t)) w1,  γ̇(t) = g(γ(t), p(t)),  t > 0,
with the initial conditions p(0) = p0 = const., γ(0) = γ0 = const.
The analytic investigation and the numerical solution of this simple system can give a tentative idea of the influence of the dependence c = c(p, γ) on the behaviour of the system, and thus provide comparative characteristics for the solution of the general problem and for the correspondence with the physical concept of the behaviour of the system.
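A minimal sketch of the numerical experiment suggested here (an addition; the constants c1, c2, c3, w1 and the remaining data are hypothetical illustrative values): it integrates the reduced system with c(p, γ) taken in the form (3.4) and g in the form (3.10), using a simple forward Euler scheme.

```python
# hypothetical illustrative constants; c(p, gamma) follows (3.4), g follows (3.10)
rho0, w1 = 1.0, -0.1
c1, c2, c3 = 1.0, 0.5, 0.2
Ku, Kr, KH, gamma_bar = 0.4, 0.2, 0.1, 1.0

def c_squared(p, gamma):
    """Square of the sound speed c(p, gamma) = c1*p^2 / (c2*p^2 + gamma + c3)."""
    return (c1 * p ** 2 / (c2 * p ** 2 + gamma + c3)) ** 2

def g(gamma, p):
    """Loosening / dissolution law (3.10)."""
    s = (gamma_bar - gamma) / KH - p
    return (Ku if s >= 0 else Kr) * s

p, gamma = 2.0, 0.3                       # initial data p0, gamma0
dt, steps = 1e-4, 50000
for _ in range(steps):                    # forward Euler for the reduced system
    dp = -rho0 * c_squared(p, gamma) * w1
    dg = g(gamma, p)
    p, gamma = p + dt * dp, gamma + dt * dg

print(p, gamma)
```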


References
[1] J. Sklíba, I. Straškraba, M. Štengl, Extended mathematical model of a safety hydraulic circuit. Report SVÚSS Běchovice, Czech Republic, registered as SVÚSS 88-03022, December 1988.
[2] E. A. Coddington, N. Levinson, Theory of Ordinary Differential Equations. McGraw-Hill, New York, 1955.


Instructions to Contributors Journal of Concrete and Applicable Mathematics A quarterly international publication of Eudoxus Press, LLC, of TN.

Editor in Chief: George Anastassiou Department of Mathematical Sciences University of Memphis Memphis, TN 38152-3240, U.S.A.

1. Manuscripts hard copies in triplicate, and in English, should be submitted to the Editor-in-Chief: Prof.George A. Anastassiou Department of Mathematical Sciences The University of Memphis Memphis,TN 38152, USA. Tel. 901.678.3144 e-mail: [email protected] Authors may want to recommend an associate editor the most related to the submission to possibly handle it. Also authors may want to submit a list of six possible referees, to be used in case we cannot find related referees by ourselves. 2. Manuscripts should be typed using any of TEX,LaTEX,AMS-TEX,or AMS-LaTEX and according to EUDOXUS PRESS, LLC. LATEX STYLE FILE. (Click HERE to save a copy of the style file.)They should be carefully prepared in all respects. Submitted copies should be brightly printed (not dot-matrix), double spaced, in ten point type size, on one side high quality paper 8(1/2)x11 inch. Manuscripts should have generous margins on all sides and should not exceed 24 pages. 3. Submission is a representation that the manuscript has not been published previously in this or any other similar form and is not currently under consideration for publication elsewhere. A statement transferring from the authors(or their employers,if they hold the copyright) to Eudoxus Press, LLC, will be required before the manuscript can be accepted for publication.The Editor-in-Chief will supply the necessary forms for this transfer.Such a written transfer of copyright,which previously was assumed to be implicit in the act of submitting a manuscript,is necessary under the U.S.Copyright Law in order for the publisher to carry through the dissemination of research results and reviews as widely and effective as possible.


4. The paper starts with the title of the article, author's name(s) (no titles or degrees), author's affiliation(s) and e-mail addresses. The affiliation should comprise the department, institution (usually university or company), city, state (and/or nation) and mail code. The following items, 5 and 6, should be on page no. 1 of the paper.

5. An abstract is to be provided, preferably no longer than 150 words.

6. A list of 5 key words is to be provided directly below the abstract. Key words should express the precise content of the manuscript, as they are used for indexing purposes. The main body of the paper should begin on page no. 1, if possible.

7. All sections should be numbered with Arabic numerals (such as: 1. INTRODUCTION). Subsections should be identified with section and subsection numbers (such as 6.1. Second-Value Subheading). If applicable, an independent single-number system (one for each category) should be used to label all theorems, lemmas, propositions, corollaries, definitions, remarks, examples, etc. The label (such as Lemma 7) should be typed with paragraph indentation, followed by a period and the lemma itself.

8. Mathematical notation must be typeset. Equations should be numbered consecutively with Arabic numerals in parentheses placed flush right, and should be referred to accordingly in the text [such as Eqs. (2) and (5)]. The running title must be placed at the top of even-numbered pages and the first author's name, et al., must be placed at the top of the odd-numbered pages.

9. Illustrations (photographs, drawings, diagrams, and charts) are to be numbered in one consecutive series of Arabic numerals. The captions for illustrations should be typed double spaced. All illustrations, charts, tables, etc., must be embedded in the body of the manuscript in proper, final, print position. In particular, the manuscript, source, and PDF file versions must be at camera-ready stage for publication or they cannot be considered. Tables are to be numbered (with Roman numerals) and referred to by number in the text. Center the title above the table, and type explanatory footnotes (indicated by superscript lowercase letters) below the table.

10. List references alphabetically at the end of the paper and number them consecutively. Each must be cited in the text by the appropriate Arabic numeral in square brackets on the baseline.


References should include (in the following order): initials of first and middle name, last name of author(s), title of article, name of publication, volume number, inclusive pages, and year of publication. Authors should follow these examples:

Journal Article
1. H. H. Gonska, Degree of simultaneous approximation of bivariate functions by Gordon operators, (journal name in italics) J. Approx. Theory, 62, 170-191 (1990).

Book
2. G. G. Lorentz, (title of book in italics) Bernstein Polynomials (2nd ed.), Chelsea, New York, 1986.

Contribution to a Book
3. M. K. Khan, Approximation properties of beta operators, in (title of book in italics) Progress in Approximation Theory (P. Nevai and A. Pinkus, eds.), Academic Press, New York, 1991, pp. 483-495.

11. All acknowledgements (including those for a grant and financial support) should occur in one paragraph that directly precedes the References section.

12. Footnotes should be avoided. When their use is absolutely necessary, footnotes should be numbered consecutively using Arabic numerals and should be typed at the bottom of the page to which they refer. Place a line above the footnote, so that it is set off from the text. Use the appropriate superscript numeral for citation in the text.

13. After each revision is made, please again submit three hard copies of the revised manuscript, including the final one. After a manuscript has been accepted for publication and all revisions have been incorporated, the manuscript, including the TeX/LaTeX source file and the PDF file, is to be submitted to the Editor's Office on a personal-computer disk, 3.5 inch size. Label the disk with clearly written identifying information, such as: your name, title of article, kind of computer used, kind of software and version number, disk format and file names of article, as well as the abbreviated journal name. Package the disk in a disk mailer or protective cardboard and ship it properly. Make sure the contents of the disk are identical with those of the final hard copies submitted! Note: The Editor's Office cannot accept the disk without the accompanying matching hard copies of the manuscript. No e-mail final submissions are allowed! The disk submission must be used.

14. Effective 1 Nov. 2009: for the current journal page charges, contact the Editor in Chief. Upon acceptance of the paper an invoice will be sent to the contact author. The fee payment will be due one month from the invoice date. The article will proceed to publication only after the fee is paid. The charges are to be sent, by money order or certified check, in US dollars, payable to Eudoxus Press, LLC, to the address shown on the Eudoxus homepage.


No galleys will be sent, and the contact author will receive one (1) electronic copy of the journal issue in which the article appears.

15. This journal will consider for publication only papers that contain proofs for their listed results.

 


TABLE OF CONTENTS, JOURNAL OF CONCRETE AND APPLICABLE MATHEMATICS, VOL. 8, NO. 4, 2010

On the numerical solution of nonlinear delay differential equations, S. Karimi Vanani, A. Aminataei, 568
Inclusion Theorems for Absolute Matrix Summability Methods, W. T. Sulaiman, 577
Integral Inequalities Concerning Triple Integrals, W. T. Sulaiman, 585
On some inequalities for Concave Functions, W. T. Sulaiman, 594
Recurrence relation with binomial coefficient, George Grossman, Aklilu Zeleke, Xinyun Zhu, 602
On the q-extension of Genocchi polynomials, C. S. Ryoo, 616
On Best Simultaneous Approximation in Semi Metric Spaces, H. K. Pathak and Satyaj Tiwari, 623
On best uniform approximation of periodic functions by trigonometric polynomials, Michael I. Ganzburg, 631
Some convergence theorems for a class of generalized Φ-hemicontractive mappings, Chang He Xiang, Zhe Chen, Ke Quan Zhao, 638
Iterative algorithms for a countable family of nonexpansive mappings, Yisheng Song, Xiao Liu, 645
Statistical Convergence and Statistical Core of Sequences of Bounded Linear Operators, A. Gökhan, 656
The flow of a liquid with cavitation, Ivan Straskraba, Emil Vitasek, 668