Gravity Probe B Mission Jan 14, 2005
Message
From: Hilmar Zonneveld, Independent Consultant, Cochabamba, Bolivia
Date: 15/01/2005 22:37:00
General information
Forum: Politics
Category: Other - Miscellaneous
Thread ID: 00977540
Message ID: 00977541
Views: 13
This time, I found the section about data compression quite interesting.

>Hi,
>
>Here is the update of the Gravity Probe B mission for Jan 14, 2005.
>
>#-------------------------------------------
>
>==============================================
>GRAVITY PROBE B MISSION UPDATE FOR 14 JANUARY 2005
>==============================================
>
>On mission day #269, the spacecraft is in excellent health, with all
>subsystems performing well. The GP-B spacecraft is flying drag-free
>around gyro #3, maintaining a constant roll rate of 0.7742 rpm (77.5
>seconds per revolution). All four gyros are digitally suspended in
>science mode. The temperature inside the Dewar is holding steady at
>1.82 kelvin. We have been collecting science data for 20 weeks, just
>under halfway through the science phase of the mission. The data
>collection process continues to proceed smoothly, and the quality of
>the data remains excellent.
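>
>As a quick sanity check on those numbers (a throwaway Python snippet,
>not part of our flight or ground software), the quoted roll period
>follows directly from the roll rate:
>
>    # Convert the 0.7742 rpm roll rate above into seconds per revolution.
>    roll_rate_rpm = 0.7742
>    seconds_per_rev = 60.0 / roll_rate_rpm
>    print(round(seconds_per_rev, 1))   # -> 77.5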
>
>In this week's update, we describe the systems and processes that we
>use to collect both status and scientific data from the spacecraft.
>In an upcoming highlight, we will describe how we use safemodes to
>enable the spacecraft to automatically protect itself when anomalies
>occur on-board, and in another upcoming highlight, we will provide a
>general description of our data reduction and analysis process. These
>highlights will help to provide an understanding of why events like
>proton strikes from solar radiation have not significantly affected
>our experimental results. The information about telemetry and data
>collection in this week's highlights was provided by GP-B Data
>Processing Lead and Webmaster, Jennifer Spencer.
>
>OVERVIEW OF DATA COLLECTION AND TELEMETRY
>===================================
>Our GP-B spacecraft autonomously collects data--in real time--from
>over 9,000 sensors or monitors. On-board the spacecraft there is a
>memory bank, called a solid-state recorder (SSR), which has the
>capacity to hold about 15 hours of spacecraft data--both system status
>data and science data. The spacecraft does not communicate directly
>with the GP-B Mission Operations Center (MOC) here at Stanford.
>Rather, it communicates with a network of NASA telemetry satellites,
>called TDRSS (Tracking and Data Relay Satellite System), and with
>NASA ground tracking stations.
>
>Many spacecraft share these NASA telemetry facilities, so GP-B must
>schedule time to communicate with them. These scheduled spacecraft
>communication sessions are called "passes," and the GP-B spacecraft
>typically completes 6-10 TDRSS passes and 4 ground station passes
>each day. During communications passes, commands are relayed to the
>spacecraft from the MOC, and data is relayed back via the satellites,
>ground stations, and NASA data processing facilities. The TDRSS links
>have a relatively slow data rate, so we can only collect spacecraft
>status data and send commands during TDRSS passes. We collect science
>data during the ground passes. That's the "big picture." Following is
>a more detailed look at the various data collection and communication
>systems described above.
>
>THE SOLID STATE RECORDER (SSR)
>=========================
>An SSR is basically a bank of Random Access Memory (RAM) boards, used
>on-board spacecraft to collect and store data. It is typically a
>stand-alone "black box," containing multiple memory boards and
>controlling electronics that provide data management, fault
>tolerance, and error detection and correction. The SSR on-board the
>GP-B spacecraft has approximately 185 MB of memory--enough to hold ~
>15.33 hours of spacecraft data. This is not enough memory to hold all
>of the data generated by the various monitors, so the GP-B Mission
>Operations staff controls what data is collected at any given time,
>through commands sent to the spacecraft.
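>
>For a rough sense of scale (a back-of-the-envelope Python sketch;
>treating 185 MB as 185 x 10^6 bytes is an assumption, not a spec),
>the average recording rate implied by those two figures is:
>
>    # Average fill rate implied by a 185 MB SSR holding ~15.33 hours of data.
>    capacity_bits = 185e6 * 8          # assumes 185 MB = 185 x 10^6 bytes
>    seconds = 15.33 * 3600
>    avg_rate_kbps = capacity_bits / seconds / 1000
>    print(round(avg_rate_kbps, 1))     # -> ~26.8 kbit/s averaged over the day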
>
>Many instruments on-board the spacecraft have their own memory banks.
>Data rates from these instruments vary--most send data every 0.1
>second, but some are faster and others slower. The data from all of
>these instruments is collected by the primary data bus (communication
>path) and sent to the central computer, called the CCCA. The CCCA
>then sends the data to the SSR.
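>
>A toy illustration of those differing rates (plain Python, with
>made-up instrument names and periods except for the 0.1-second
>figure above):
>
>    # Instruments sample at different periods; most send data every 0.1 s,
>    # some faster and some slower. Count samples each contributes per second.
>    instrument_periods = {"typical": 0.1, "fast": 0.005, "slow": 1.0}
>
>    def samples_per_window(window_s):
>        return {name: round(window_s / period)
>                for name, period in instrument_periods.items()}
>
>    print(samples_per_window(1.0))   # {'typical': 10, 'fast': 200, 'slow': 1}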
>
>The data itself is categorized into five subtypes:
>1. Sensor programmable telemetry--high data rate of 0.1 seconds,
>greater than 9,000 monitors, mostly used for science &
>engineering--this is GP-B's "primary" useful data, including most
>science data.
>
>2. Event data--For example, whether the vehicle is in eclipse (tells
>when we entered eclipse behind the earth and when we emerged)
>
>3. Database readouts--Used to confirm that the on-board database is
>the same as we on the ground think it is--we use this to verify
>that, say, filter-setting commands were received and enacted.
>
>4. Memory readout (MRO)--Used to ensure that the binary memory
>on-board is the same (error free) memory we think it is--this is
>where single & multibit errors occur (this is not collected data, but
>only programmable processes--i.e., the spacecraft's Operating System).
>If we find errors in the MROs, we can re-load the memory. Solar wind
>(proton hits), for example, can cause errors here.
>
>5. Snapshot data--This is extremely high-speed data (1/200th of a
>second) from the SQUID (Super-Conducting Quantum Interference
>Device), Telescope and Gyro readout systems. The CCCA does some
>on-board data reduction, performing Fast Fourier Transforms (FFT) on
>some of the incoming SRE data. This on-board reduction is necessary,
>because we do not have room in our SSR, nor the telemetry bandwidth,
>to relay the high-rate data back to the MOC all the time. (Perhaps we
>should upgrade to DSL...) However, like all numerical analysis
>methodology, an FFT can become "lost" because an FFT is not always
>performed from the same starting abscissa (x-axis) value. The
>Snapshot allows us to see some of the original data sets being used
>for the FFTs and confirm that they are not lost--or if they are, we
>can fix them by making a programmed adjustment on-board. Other
>systems--the Telescope Readout (TRE) and Gyro Suspension System
>(GSS)--use their snapshots for similar instrumentation and data
>reduction validity checks. (A small numerical sketch of the FFT
>starting-point issue appears just after this list.)
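>
>To make the starting-point issue concrete, here is a small numerical
>sketch (plain NumPy, purely illustrative and unrelated to the actual
>on-board SRE code): two FFTs taken over the same signal but from
>different starting samples agree in magnitude yet differ in phase,
>which is exactly the ambiguity the snapshot data lets us resolve.
>
>    import numpy as np
>
>    fs = 200.0                              # snapshot rate: 1/200th of a second
>    t = np.arange(1500) / fs
>    signal = np.sin(2 * np.pi * 5.0 * t)    # toy 5 Hz tone
>
>    # Same block length, two different starting samples (abscissa values).
>    fft_a = np.fft.rfft(signal[0:1000])
>    fft_b = np.fft.rfft(signal[37:1037])
>
>    k = int(np.argmax(np.abs(fft_a)))       # dominant frequency bin
>    print(np.isclose(abs(fft_a[k]), abs(fft_b[k])))   # True: magnitudes match
>    print(round(float(np.angle(fft_a[k]) - np.angle(fft_b[k])), 3))  # nonzero phase shift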
>
>Data is stored to the SSR in a "First in, First out" queue. Once the
>memory is 15.33 hours full, the new data begins to overwrite the
>oldest data collected. That's why it's a good idea to dump the SSR
>to the ground at least every 12-15 hours (for safety!)...which is
>exactly what we do.
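>
>Conceptually, the overwrite behavior is just that of a fixed-size
>circular buffer. A toy Python sketch (not the actual SSR firmware):
>
>    from collections import deque
>
>    # A fixed-capacity queue: once full, each append silently drops the oldest
>    # item, just as new telemetry overwrites the oldest data on a full SSR.
>    ssr = deque(maxlen=5)              # pretend the SSR holds 5 "hours"
>    for hour in range(8):
>        ssr.append(f"data for hour {hour}")
>    print(list(ssr))                   # hours 3-7 remain; hours 0-2 were overwritten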
>
>TELEMETRY--TDRSS AND GROUND STATIONS
>================================
>So, how do we talk to the SV and get our data? NASA communicates
>with its spacecraft in several ways. The highest-availability link is
>through TDRSS, two communications satellites in orbit just waiting
>for communications. These two satellites can't handle much data at a
>time, and we only transmit to them at a 1K or 2K (kilobit per second)
>rate. For GP-B, this rate is only enough to exchange status
>information and commands, not for science. NASA also uses
>ground-based stations. There are several around the world, but the
>ground station network used is determined by satellite type and
>orbit. Because GP-B is in a polar orbit communicating primarily at
>32K (32 kilobits per second), we use the NASA Goddard "Ground
>Network". This network includes stations in Poker Flats, Alaska;
>Wallops, Virginia; Svalbard, Norway; and McMurdo station, Antarctica.
>We haven't used the South Pole station yet, but someday we may need
>to.
>
>Usually, we schedule a ground pass at one of the four ground stations
>every six hours or so. We talk to TDRSS about six times per day.
>Communications must be scheduled and arranged, and like all
>international calls, these communications passes are not cheap!
>Interspatial (satellite-to-satellite) calls are the most expensive, and
>ground-to-space calls are slightly less costly. We could talk to
>TDRSS and the ground stations more often, but it costs money, so we
>follow our regular schedule unless there's an emergency. During
>safemodes or other anomalies, we schedule extra TDRSS and ground
>passes as needed. We are not in contact with the spacecraft at all
>times.
>
>While streaming 32K SSR data to the ground, we cannot record to the
>SSR. The data collected by the sensors during the transmission is
>therefore folded into the transmission stream, bypassing the SSR. We have
>to change antennas (switching from forward to aft antenna) midway
>through each ground pass. During this antenna change, we lose about
>30 seconds of the real-time data because it's being "beamed into
>space". Our data capture on the mission thus far is 99.01%, and our
>spec from NASA was 90%, so we are doing just fine in terms of data
>capture, despite the antenna switches. It takes about 12 minutes to
>collect the entire content of the SSR.
>
>RELAYING DATA TO THE GP-B MOC AT STANFORD
>===================================
>When the spacecraft data goes through TDRSS, it is transmitted
>directly to the Stanford University GP-B MOC through our data link
>with NASA, in real time. We record it on our computers in a format
>similar to the ground station data, and then send it through our data
>processing center.
>
>However, data relayed through ground stations goes through an
>intermediate step, before it is sent to our Mission Operations
>Center. When data arrives at a ground station, 32 byte headers are
>put on each data packet to identify it. The identifiers include our
>spacecraft ID, the ground receipt time, whether or not Reed-Solomon
>encoding was successfully navigated, the ground station ID, and
>several error correction checks. The data is stored at the local
>station and a copy is sent to a central NASA station. After it
>passes transmission error checks at NASA, it comes to us at Stanford
>University. The average 15-hour file takes between 1.5 and 4 hours
>to arrive here. (If you are wondering about Reed-Solomon encoding,
>see http://direct.xilinx.com/bvdocs/whitepapers/wp110.pdf.)
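>
>Purely as an illustration of what such a header might look like (the
>field widths and values below are invented; they are not the actual
>NASA packet format, only the field names mentioned above), one could
>pack and unpack a 32-byte header in Python like this:
>
>    import struct
>
>    # Hypothetical 32-byte header: spacecraft ID, ground receipt time,
>    # Reed-Solomon status flag, ground station ID, and error-check fields.
>    HEADER_FMT = ">H d B B 16s I"      # 2+8+1+1+16+4 = 32 bytes, big-endian
>    header = struct.pack(HEADER_FMT,
>                         1234,              # spacecraft ID (made up)
>                         1105747200.0,      # ground receipt time, epoch seconds
>                         1,                 # 1 = Reed-Solomon check passed
>                         7,                 # ground station ID (made up)
>                         b"error-check-data",
>                         0xDEADBEEF)        # error-correction check word
>    assert len(header) == 32
>    print(struct.unpack(HEADER_FMT, header))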
>
>UNCOMPRESSING AND FORMATTING THE DATA
>=================================
>Once the data is here at Stanford, a laborious process begins.
>Spacecraft data, by its very nature, must be highly compressed so
>that as much data as possible can be stored. While we have over
>9,000 monitors on-board, we cannot sample and store more than about
>5,500 of them at a time. That's why our telemetry is
>"programmable"--we can choose what data we want to beam down.
>However, in order to get as much data as possible, we compress it
>highly.
>
>The data is stored in binary format, and the format includes several
>complexities and codes to indicate the states of more complex
>monitors. For example, we might encode the following logic in the
>data: "if bit A=0, then interpret bits B and C in a certain way; but
>if bit A=1, then use a very different filter with bits B and C." The
>data is replete with this kind of logic. In order to decompress and
>decode all of this logic, we use a complex map. Our software first
>separates the data into its five types (described above). Then, type
>one undergoes "decommutation." Once all data is translated into
>standard text format and decommutated, it is stored in our vast
>database (over one terabyte). This is the data in its most useful,
>but still "raw" form; we call this "Level 1 data." It takes about an
>hour to process 12 hours of spacecraft data. It is the Data
>Processing team's job to monitor this process, making sure files
>arrive intact, and unraveling any data snarls that may come from
>ground pass issues.
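>
>As a miniature example of that kind of bit-dependent logic (the
>monitor names, bit positions, and scalings here are invented purely
>for illustration; the real telemetry map is far larger), decoding one
>byte might look like this in Python:
>
>    def decode_word(word):
>        """Toy decommutation: bit A selects how bits B and C are interpreted."""
>        bit_a = (word >> 7) & 0x1          # bit A: mode flag
>        bits_bc = (word >> 5) & 0x3        # bits B and C
>        if bit_a == 0:
>            # Mode 0: B and C are read directly as a 2-bit level, 0-3.
>            return {"heater_level": bits_bc}
>        # Mode 1: the same two bits select one of four filter bandwidths (Hz).
>        return {"filter_bandwidth_hz": [1, 5, 20, 100][bits_bc]}
>
>    print(decode_word(0b01000000))   # {'heater_level': 2}
>    print(decode_word(0b11000000))   # {'filter_bandwidth_hz': 20}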
>
>Our science team takes the Level 1 data, filters it, factors in
>ephemeris information and other relevant daily information (solar
>activity, etc.). The science team also performs several important
>"pre-processing" steps on the data that will be described in a future
>highlight. Once that initial science process is complete, that data
>is stored in the "Level 2" database. From there, more sophisticated
>analysis can be performed.
>
>--
>**********************************
>NASA - Stanford - Lockheed Martin
> Gravity Probe B Program
>"Testing Einstein's Universe"
> http://einstein.stanford.edu
>
>Bob Kahn
>Public Affairs Coordinator
>
>Phone: 650-723-2540
>Fax: 650-723-3494
>Email: kahn@relgyro.stanford.edu
>**********************************
>
>#-------------------------------------------
>
>Regards,
>
>LelandJ
Difference in opinions hath cost many millions of lives: for instance, whether flesh be bread, or bread be flesh; whether whistling be a vice or a virtue; whether it be better to kiss a post, or throw it into the fire... (from Gulliver's Travels)